191
On Two Combinatorial Optimization Problems in Graphs: Grid Domination and Robustness. Fata, Elaheh, 26 August 2013.
In this thesis, we study two problems in combinatorial optimization: the dominating set problem and the robustness problem. In the first half of the thesis, we focus on the dominating set problem in grid graphs and present a distributed algorithm for finding near-optimal dominating sets on grids. The dominating set problem is a well-studied mathematical problem in which the goal is to find a minimum-size subset of vertices of a graph such that every vertex not in the set has a neighbor inside it. We first provide a simpler proof for an existing centralized algorithm that constructs dominating sets on grids, showing that the size of the provided dominating set is upper-bounded by ⌈(m+2)(n+2)/5⌉ for m × n grids and differs from the optimal domination number of the grid by at most five. We then design a distributed grid domination algorithm, built on the centralized one, to locate mobile agents on a grid such that they constitute a dominating set for it. We also generalize the centralized and distributed algorithms to the k-distance dominating set problem, in which every grid vertex must be within distance k of some vertex in the dominating set.
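As a concrete illustration, here is a minimal Python sketch (not the thesis algorithm itself): the classical diagonal pattern places dominators on every fifth diagonal, which yields the 1/5 density behind the ⌈(m+2)(n+2)/5⌉ bound on interior vertices; boundary cells need the extra handling the centralized algorithm provides.

```python
def diagonal_pattern(m, n):
    # Dominators on cells where (2*i + j) % 5 == 0: on the infinite grid,
    # every cell is within distance 1 of such a cell. On a finite m x n
    # grid, boundary cells may be left uncovered (hence the +2 padding in
    # the bound and the boundary handling in the real algorithm).
    return {(i, j) for i in range(m) for j in range(n) if (2 * i + j) % 5 == 0}

def is_dominating(m, n, D):
    # Check that every vertex is in D or 4-adjacent to a vertex of D.
    steps = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    return all(any((i + di, j + dj) in D for di, dj in steps)
               for i in range(m) for j in range(n))

def upper_bound(m, n):
    # ceil((m + 2) * (n + 2) / 5), computed without floating point.
    return -((m + 2) * (n + 2) // -5)
```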
In the second half of the thesis, we study the computational complexity of checking a graph property known as robustness. This property plays a key role in diffusion of information in networks. A graph G=(V,E) is r-robust if for all pairs of nonempty and disjoint subsets of its vertices A,B, at least one of the subsets has a vertex that has at least r neighbors outside its containing set. In the robustness problem, the goal is to find the largest value of r such that a graph G is r-robust. We show that this problem is coNP-complete. En route to showing this, we define some new problems, including the decision version of the robustness problem and its relaxed version in which B=V \ A. We show these two problems are coNP-hard by showing that their complement problems are NP-hard.
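A brute-force check of the definition can be sketched as follows (hedged Python illustration; its exponential enumeration of subset pairs is in line with the coNP-completeness result, so it is feasible only for very small graphs).

```python
from itertools import chain, combinations

def max_robustness(adj):
    # adj: dict mapping each vertex to the set of its neighbours.
    V = list(adj)

    def nonempty_subsets(vs):
        return chain.from_iterable(combinations(vs, k)
                                   for k in range(1, len(vs) + 1))

    def reach(S):
        # Largest number of neighbours any vertex of S has outside S.
        S = set(S)
        return max(len(adj[v] - S) for v in S)

    r = len(V) - 1                      # trivial upper bound on robustness
    for A in nonempty_subsets(V):
        rest = [v for v in V if v not in A]
        for B in nonempty_subsets(rest):
            # G is r-robust iff every pair attains max(reach) >= r.
            r = min(r, max(reach(A), reach(B)))
    return r
```

For example, on the 4-cycle (adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}) this returns 1: the partition A = {0, 1}, B = {2, 3} shows the cycle is not 2-robust.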
192
The Consequences of stochastic gene expression in the nematode Caenorhabditis elegans. Burga Ramos, Alejandro Raúl, 20 July 2012.
Genetically identical cells and organisms growing in homogeneous environmental conditions can show significant phenotypic variation. Furthermore, mutations often have consequences that vary among individuals (incomplete penetrance). Biochemical processes such as those involved in gene expression are subject to fluctuations due to their inherently probabilistic nature. However, it is not clear how these fluctuations affect multicellular organisms carrying mutations, or whether stochastic variation in gene expression among individuals could confer any advantage to populations. We have investigated the consequences of stochastic gene expression using the nematode Caenorhabditis elegans as a model. Here we show that inter-individual stochastic variation in the induction of both specific and more general buffering systems combines to determine the outcome of inherited mutations in each individual. We also demonstrate that genetic and environmental robustness are coupled in C. elegans: individuals with higher induction of the stress response are more robust to the effects of mutations but incur a fitness cost, suggesting that variation at the population level could be beneficial in unpredictable environments.
193
Role of network topology based methods in discovering novel gene-phenotype associations. Güney, Emre, 25 September 2012.
The cell is governed by the complex interactions among various types of biomolecules. Coupled with environmental factors, variations in DNA can cause alterations in normal gene function and lead to a disease condition. Often, such disease phenotypes involve coordinated dysregulation of multiple genes that implicate inter-connected pathways. Towards a better understanding and characterization of mechanisms underlying human diseases, here, I present GUILD, a network-based disease-gene prioritization framework. GUILD associates genes with diseases using the global topology of the protein-protein interaction network and an initial set of genes known to be implicated in the disease. Furthermore, I investigate the mechanistic relationships between disease-genes and explain the robustness emerging from these relationships. I also introduce GUILDify, an online and user-friendly tool which prioritizes genes for their association to any user-provided phenotype. Finally, I describe current state-of-the-art systems-biology approaches where network modeling has helped extending our view on diseases such as cancer. / La cèl•lula es regeix per interaccions complexes entre diferents tipus de biomolècules. Juntament amb factors ambientals, variacions en el DNA poden causar alteracions en la funció normal dels gens i provocar malalties. Sovint, aquests fenotips de malaltia involucren una desregulació coordinada de múltiples gens implicats en vies interconnectades. Per tal de comprendre i caracteritzar millor els mecanismes subjacents en malalties humanes, en aquesta tesis presento el programa GUILD, una plataforma que prioritza gens relacionats amb una malaltia en concret fent us de la topologia de xarxe. A partir d’un conjunt conegut de gens implicats en una malaltia, GUILD associa altres gens amb la malaltia mitjancant la topologia global de la xarxa d’interaccions de proteïnes. A més a més, analitzo les relacions mecanístiques entre gens associats a malalties i explico la robustesa es desprèn d’aquesta anàlisi. També presento GUILDify, un servidor web de fácil ús per la priorització de gens i la seva associació a un determinat fenotip. Finalment, descric els mètodes més recents en què el model•latge de xarxes ha ajudat extendre el coneixement sobre malalties complexes, com per exemple a càncer.
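The spirit of such network-based prioritization can be sketched with a random walk with restart (a hedged Python/NumPy illustration; GUILD's actual scoring functions differ in detail, and the function name and parameters here are invented).

```python
import numpy as np

def prioritize(A, seed_scores, restart=0.5, iters=50):
    # A: (n x n) adjacency matrix of the interaction network (assumes
    # every node has at least one edge). seed_scores: nonzero entries
    # for the genes known to be implicated in the disease.
    W = A / A.sum(axis=0, keepdims=True)      # column-stochastic walk matrix
    p0 = seed_scores / seed_scores.sum()
    p = p0.copy()
    for _ in range(iters):
        # Diffuse scores over the network, pulled back to the seeds.
        p = (1 - restart) * (W @ p) + restart * p0
    return p                                  # higher score = stronger association
```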
194
Integration Techniques of Fault Detection and Isolation Using Interval Observers. Meseguer Amela, Jordi, 30 June 2009.
Interval observers are shown to be a suitable approach for detecting and isolating faults in complex dynamical industrial systems. Concerning fault detection, interval observation is an appropriate passive robust strategy for generating the adaptive threshold used in residual evaluation when model uncertainty is located in the parameters (an interval model). In this approach, the observer gain is a key parameter, since it determines the time evolution of the residual sensitivity to a fault and the minimum detectable fault. This thesis shows that the whole fault detection process is governed by the dynamics of the fault residual sensitivity functions and by the time evolution of the adaptive threshold associated with the interval observer. Since both of these fault detection properties depend on the observer gain, the gain becomes a tuning parameter that can enhance fault detection performance while avoiding drawbacks of interval analytical models, such as the wrapping effect. The effect of the observer gain on fault detection, and how this parameter can avoid such drawbacks, is analyzed in depth. One result of this analysis is the determination of the minimum detectable fault function for a given fault type, which induces a classification of faults according to the time evolution of their detectability: permanently (strongly) detected, non-permanently (weakly) detected, or non-detected. Two examples illustrate the derived results in this part of the thesis: a mineral grinding-classification process and an industrial servo actuator.
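To make the adaptive-threshold idea concrete, here is a deliberately minimal Python sketch for a scalar system (all details are illustrative assumptions, not the thesis formulation, which treats the general case).

```python
def detect(y, u, a_lo, a_hi, L, x_lo=-1.0, x_hi=1.0):
    # y, u: measurement and input sequences; [a_lo, a_hi]: interval of
    # the uncertain parameter a; L: observer gain; [x_lo, x_hi]: initial
    # state envelope.
    xl, xu = x_lo, x_hi
    alarms = []
    for yk, uk in zip(y, u):
        alarms.append(not (xl <= yk <= xu))     # adaptive-threshold test
        # Interval observer update x+ = a*x + u + L*(y - x): for fixed a
        # the map is linear in x, so enumerating the corners of the
        # (a, x) box gives the exact envelope; the gain L governs how
        # fast the envelope contracts (tempering the wrapping effect).
        cands = [a * x + uk + L * (yk - x) for a in (a_lo, a_hi)
                 for x in (xl, xu)]
        xl, xu = min(cands), max(cands)
    return alarms
```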
Concerning the interface between fault detection and fault isolation, this thesis shows that the two modules cannot be considered separately, since the fault detection process strongly influences the fault isolation result. This influence is due not only to the time evolution of the fault signals generated by the fault detection module but also to the fact that the fault residual sensitivity functions determine which faults affect a given fault signal and the dynamics of that signal for each fault. The thesis therefore proposes that the interface between fault detection and fault isolation consider a set of fault signal properties: the binary property, the sign property, the fault residual sensitivity property, the occurrence order property, and the occurrence time instant property (a toy signature-matching sketch follows below). Moreover, because the observer gain influences both the fault detection stage and the fault residual sensitivity functions, this thesis demonstrates that the gain also plays a key role in the fault isolation module, whose performance can be enhanced when this parameter is tuned properly (e.g., fault distinguishability may be increased). Finally, this thesis analyzes the timed discrete-event nature of the fault signals generated by the fault detection module and consequently suggests using timed discrete-event models for the fault isolation module, which are shown to improve the fault isolation result. Moreover, since the monitored system is modeled by an interval observer, this thesis shows how the qualitative fault isolation model can be built from the system's analytical model. The proposed fault isolation method is applied to detect and isolate faults in the limnimeters of Barcelona's urban sewer system.
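The signature-matching idea behind such an interface can be illustrated as follows (a hedged Python toy with an invented fault set; the thesis method additionally exploits sensitivity dynamics and occurrence times).

```python
# Each hypothetical fault f1..f3 has a theoretical signature over two
# fault signals r1, r2: binary activation, sign, and order of appearance.
SIGNATURES = {
    "f1": {"binary": (1, 0), "sign": (+1, 0),  "order": ("r1",)},
    "f2": {"binary": (1, 1), "sign": (+1, -1), "order": ("r1", "r2")},
    "f3": {"binary": (0, 1), "sign": (0, -1),  "order": ("r2",)},
}

def isolate(observed):
    # Return all fault candidates whose signature matches the observation.
    return [f for f, sig in SIGNATURES.items() if sig == observed]

print(isolate({"binary": (1, 1), "sign": (+1, -1), "order": ("r1", "r2")}))
# -> ['f2']
```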
Keywords: Fault Detection, Fault Diagnosis, Robustness, Observers, Intervals, Discrete-event Systems.
195
Crafting a Resilient and Robust Supply Chain: An Empirical Case Study of Volvo CE & Kapsch. Rasmussen, Kristoffer, January 2012.
Globalized markets, resources, and means of communication have reshaped the modern world. Today's global supply chains are agile but vulnerable to crises, which affect them both directly and indirectly, so appropriate measures and strategies are needed to mitigate the effects of disasters, small or large. The purpose of this thesis is to study how companies assess, avoid, and mitigate risk in their supply chains. Two cases have been used to explore the subject: primary face-to-face interviews were conducted with representatives from the two companies, and secondary documentation from the companies was analyzed. Managers are well aware of the risks embedded in their supply chains and have processes to forecast and handle disruptions. Redundant resources are used to varying extents to prepare for unforeseen events, and techniques such as dual sourcing are used to ensure uninterrupted supply. Yet even though they see themselves as prepared, the companies have difficulty mitigating the impact of uncertain events, and such disruptions can have a large negative impact on their supply chains.
196
Robust Automotive Positioning: Integration of GPS and Relative Motion Sensors. Kronander, Jon, January 2004.
Automotive positioning systems relying exclusively on the input from a GPS receiver, which is a line-of-sight sensor, tend to be sensitive to situations with limited sky visibility, such as urban environments with tall buildings, parking structures, tree cover, tunnels, and bridges. In these situations, the system has to rely on the integration of relative motion sensors to estimate vehicle position. However, these sensor measurements are generally affected by errors such as offsets and scale factors that cause the position accuracy to deteriorate rapidly once GPS input is lost. The approach in this thesis is to use a GPS receiver in combination with low-cost sensor equipment to produce a robust positioning module capable of handling situations where GPS input is corrupted or unavailable. The working principle is to calibrate the relative motion sensors while GPS is available in order to improve accuracy during GPS outages. To fuse the GPS information with the sensor outputs, different models have been proposed and evaluated on real data sets. These models tend to be nonlinear and have therefore been processed in an extended Kalman filter structure. Experiments show that the proposed solutions can compensate for most of the errors associated with the relative motion sensors and that the resulting positioning accuracy is improved accordingly.
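The flavor of such a filter can be sketched briefly (a hedged, heavily simplified Python/NumPy illustration, not the thesis models: the state is planar position, heading, and a wheel-speed scale factor standing in for the sensor errors being calibrated).

```python
import numpy as np

def ekf_step(x, P, v_meas, yaw_rate, dt, Q, gps=None, R=None):
    # x = [px, py, heading, s]; s is a wheel-speed scale factor that is
    # observable only while GPS fixes arrive -- estimating it when GPS is
    # available is what keeps dead reckoning accurate during outages.
    px, py, th, s = x
    v = s * v_meas                                   # corrected speed
    x_pred = np.array([px + v * np.cos(th) * dt,
                       py + v * np.sin(th) * dt,
                       th + yaw_rate * dt,
                       s])
    F = np.array([[1, 0, -v * np.sin(th) * dt, v_meas * np.cos(th) * dt],
                  [0, 1,  v * np.cos(th) * dt, v_meas * np.sin(th) * dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])                     # Jacobian of the motion model
    P = F @ P @ F.T + Q
    x = x_pred
    if gps is not None:                              # GPS position update
        H = np.array([[1.0, 0, 0, 0],
                      [0, 1.0, 0, 0]])
        innov = gps - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ innov
        P = (np.eye(4) - K @ H) @ P
    return x, P
```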
197
Regularization Using a Parameterized Trust Region Subproblem. Grodzevich, Oleg, January 2004.
We present a new method for regularization of ill-conditioned problems that extends the traditional trust-region approach. Ill-conditioned problems arise, for example, in image restoration and in the mathematical processing of medical data, and involve severely ill-conditioned matrices. The method makes use of the L-curve and its maximum-curvature criterion, a recently proposed strategy for finding a good regularization parameter. We describe the method and show its application to an image restoration problem; a MATLAB code for the algorithm is also provided. Finally, a comparison to the CGLS approach is given and analyzed, and future research directions are proposed.
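The connection between trust regions and regularization can be sketched as follows (a hedged Python illustration of the standard view, not the parameterized method of the thesis, which instead selects the parameter via the L-curve's maximum curvature; the function name and tolerances are invented).

```python
import numpy as np

def tr_regularize(A, b, delta, lo=1e-12, hi=1e12, iters=60):
    # Solve min ||Ax - b|| subject to ||x|| <= delta via the equivalent
    # Tikhonov system (A^T A + lam*I) x = A^T b, adjusting lam by
    # bisection on a log scale until ||x(lam)|| is approximately delta.
    AtA, Atb = A.T @ A, A.T @ b
    n = AtA.shape[0]
    x = np.zeros(n)
    for _ in range(iters):
        lam = np.sqrt(lo * hi)                 # geometric midpoint
        x = np.linalg.solve(AtA + lam * np.eye(n), Atb)
        if np.linalg.norm(x) > delta:
            lo = lam        # step too long: regularize harder
        else:
            hi = lam        # step inside the region: relax
    return x
```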
198
Computational Studies on the Evolution of Metabolism. Ullrich, Alexander, 27 February 2012.
Throughout evolution, living organisms have developed desirable properties, such as the ability to maintain functionality despite changes in their environment or inner structure, the formation of functional modules ranging from metabolic pathways to organs, and, most essentially, the capacity to adapt and evolve through natural selection. In the metabolic networks of modern organisms, many key pathways, such as the citric acid cycle, glycolysis, and the biosynthesis of most amino acids, are common to all of them.
Understanding the evolutionary mechanisms behind this development of complex biological systems is an intriguing and important task of current research in biology as well as artificial life. Several competing hypotheses for the formation of metabolic pathways and the mechanisms that shape metabolic networks have been discussed in the literature, each of which finds support from comparative analysis of extant genomes. However, while these traditional methods are powerful tools for the investigation of metabolic evolution, they do not allow us to look far enough back in evolution, to the time when metabolism had to emerge and evolve into the form we observe today. To this end, simulation studies have been introduced to discover the principles of metabolic evolution and the sources of emergent metabolic properties. These approaches differ considerably in the realism and explicitness of their underlying models. A difficult trade-off between realism and computational feasibility has to be made, and further modeling decisions on many scales have to be taken into account, requiring the combination of knowledge from fields such as chemistry, physics, biology and, last but not least, computer science.
In this thesis, a novel computational model for the in silico evolution of early metabolism is introduced. It comprises all the components, on different scales, needed to resemble a situation of evolving metabolic protocells in an RNA world. The model therefore contains a minimal RNA-based genetics and an evolving metabolism of catalytic ribozymes that manipulate a rich underlying chemistry. An evolvable sequence-to-function map is used to allow the metabolic organization to escape the confines of the chemical space set by the initial conditions of the simulation and, more generally, to allow open-ended evolution. At the heart of the metabolic subsystem is a graph-based artificial chemistry equipped with built-in thermodynamics. The generation of the metabolic reaction network is realized as a rule-based stochastic simulation, with the necessary reaction rates calculated on the fly from the chemical graphs of the reactants. Selection among the population of protocells is based on their optimal metabolic yield, computed using flux balance analysis.
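The selection step can be illustrated with a toy flux balance computation (a hedged sketch with an invented three-reaction chain, not the thesis chemistry): maximize the yield flux subject to steady state S v = 0 and flux bounds.

```python
import numpy as np
from scipy.optimize import linprog

# Toy chain A -> B -> C -> yield; columns are reaction fluxes v1, v2, v3
# and rows are the internal metabolites B and C (A is an external "food").
S = np.array([[1, -1,  0],
              [0,  1, -1]])
bounds = [(0, 10)] * 3            # flux capacity limits
c = [0, 0, -1]                    # linprog minimizes, so negate the yield flux
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal yield flux:", -res.fun)    # -> 10.0
```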
The introduced computational model allows for profound investigations of the evolution of early metabolism and the underlying evolutionary mechanisms. One application in this thesis is the study of the formation of metabolic pathways. Four established hypotheses, namely backward evolution, forward evolution, patchwork evolution, and the shell hypothesis, are discussed within the realms of this in silico evolution study. The metabolic pathways of the networks evolved in various simulation runs are determined and analyzed in terms of their evolutionary direction. The simulation results suggest that the seemingly mutually exclusive hypotheses may well be compatible when one considers that different processes dominate different phases in the evolution of a metabolic system. Further, forward evolution is found to shape the metabolic network in the very early steps of evolution; in later and more complex stages, enzyme recruitment supersedes forward evolution while keeping a core set of pathways from the early phase. Backward evolution can only be observed under conditions of steady environmental change. Additionally, the evolutionary histories of enzymes and metabolites were studied at the network level as well as for single instances, showing a great variety of evolutionary mechanisms at work.
The second major focus of the in silico evolutionary study is the emergence of complex system properties, such as robustness and modularity. To this end, several techniques were used to analyze the metabolic systems. The measures for complex properties stem from graph theory, steady-state analysis, and neutral network theory; some are used in general network analysis, while others were developed specifically for this work. To discover potential sources for the emergence of system properties, three different evolutionary scenarios were tested and compared. The first two scenarios are the same as in the first part of the investigation: evolution under static conditions, and evolution incorporating a steady change in the set of "food" molecules. A third scenario was added that also simulates static evolution but with an increased mutation rate and regular events of horizontal gene transfer between protocells of the population. Comparing all three scenarios with real-world metabolic networks shows a significant similarity in structure and properties. Among the three scenarios, the two static evolutions yield the most robust metabolic networks; however, the networks evolved under environmental change exhibit their own strategy, a robustness more suited to their conditions. As expected from theory, horizontal gene transfer and changes in the environment seem to produce higher degrees of modularity in metabolism. The two scenarios develop rather different kinds of modularity: while horizontal gene transfer provides more isolated modules, the modules of the second scenario are far more interconnected.
199
Optimum Designs for Model Discrimination and Estimation in Binary Response Models. Hsieh, Wei-shan, 29 June 2005.
This paper is concerned with the problem of finding an experimental design for discriminating between two rival models while simultaneously minimizing the maximum bias (model robustness) in binary response experiments. The criterion for model discrimination is based on the T-optimality criterion proposed by Atkinson and Fedorov (1975), which maximizes the sum of squared deviations between the two rival models, while the criterion for model robustness is based on minimizing the maximum probability bias between the two rival models. We obtain optimum designs satisfying both criteria for some commonly used rival models in binary response experiments, such as the probit and logit models.
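For orientation, the T-optimality criterion can be written as follows (a hedged sketch in illustrative notation: η₁ is the fixed "true" model, η₂ the rival with free parameters θ₂, and ξ(x_i) the design weights).

```latex
% T-optimality (after Atkinson & Fedorov, 1975); notation illustrative.
\Delta(\xi) \;=\; \min_{\theta_2}\; \sum_{i} \xi(x_i)\,
  \bigl[\eta_1(x_i) - \eta_2(x_i,\theta_2)\bigr]^2,
\qquad
\xi^{*} \;=\; \arg\max_{\xi}\, \Delta(\xi).
```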
200
Adaptive Estimation and Hypothesis Testing Methods. Donmez, Ayca, 1 March 2010.
For statistical estimation of population parameters, Fisher's maximum likelihood estimators (MLEs) are commonly used. They are consistent, unbiased and efficient, at any rate for large n. In most situations, however, MLEs are elusive because of computational difficulties. To alleviate these difficulties, Tiku's modified maximum likelihood estimators (MMLEs) are used. They are explicit functions of sample observations and easy to compute. They are asymptotically equivalent to MLEs and, for small n, are equally efficient. Moreover, MLEs and MMLEs are numerically very close to one another. Calculating MLEs and MMLEs requires the functional form of the underlying distribution to be known. For machine data processing, however, this is not the case; what is reasonable to assume instead is that the underlying distribution is a member of a broad class of distributions. Huber assumed that the underlying distribution is long-tailed symmetric and developed the so-called M-estimators. It is very desirable for an estimator to be robust and have a bounded influence function. M-estimators, however, implicitly censor certain sample observations, which most practitioners do not appreciate. Tiku and Surucu suggested a modification to Tiku's MMLEs. The new MMLEs are robust and have bounded influence functions; in fact, they are overall more efficient than M-estimators for long-tailed symmetric distributions. In this thesis, we propose a new modification to MMLEs. The resulting estimators are robust and have bounded influence functions. We also show that they can be used not only for long-tailed symmetric distributions but for skew distributions as well. We use the proposed modification in the context of experimental design and linear regression and show that the resulting estimators, and the hypothesis testing procedures based on them, are indeed superior to earlier such estimators and tests.
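For contrast with the estimators discussed here, a generic Huber-type M-estimator of location can be sketched in a few lines (a hedged illustration of the bounded-influence idea via iteratively reweighted least squares; Tiku's MMLE formulas, which are based on ordered observations, are different).

```python
import numpy as np

def huber_location(x, c=1.345, tol=1e-8, max_iter=100):
    # Start from robust initial estimates of location and scale.
    mu = np.median(x)
    scale = np.median(np.abs(x - mu)) / 0.6745      # normalized MAD
    for _ in range(max_iter):
        r = (x - mu) / scale
        # Huber weights: unit weight near the centre, downweighted tails,
        # which is what bounds the influence function.
        w = np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu
```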