181

Control Design and Performance Analysis of Force Reflective Teleoperators - A Passivity Based Approach

Flemmer, Henrik January 2004 (has links)
In this thesis, the problem of controlling a surgical master and slave system with force reflection is studied. The problem of stiff contacts between the slave and the environment is given specific attention. The work has been carried out at KTH based on an initial cooperation with Karolinska Sjukhuset. The aim of the overall project is to study the possibilities for introduction of a force reflective teleoperator in neurological skull base operations for the particular task of bone milling and thereby, hopefully, increase patient safety, decrease surgeon workload and cost for the society.

The main contributions of this thesis are: derivation of a dynamical model of the master and operator's finger system, and experimental identification of ranges on model parameter values. Based on this model, the interaction channel controllers optimized for transparency are derived and modified to avoid the influence of the uncertain model parameters. This results in a three channel structure. To decrease the influence of the uncertain parameters locally at the master, a control loop is designed such that the frequency response of the reflected force is relatively unaffected by the uncertainties, a result also confirmed in a transparency analysis based on the H-matrix. The developed teleoperator control structure is tested in experiments where the operator could alter the contact force without facing any problems as long as the slave is in contact with the environment.

As a result of the severe difficulties for the teleoperator to move from free space motion to in-contact manipulation without oscillative behaviour, a new detection algorithm based on passivity theory is developed. The algorithm is able to detect the non-passive behaviour of the actual teleoperator induced by the discrete change in system dynamics occurring at the contact instant. A stabilization controller to be activated by the detection algorithm is designed and implemented on the master side of the teleoperator. The detection algorithm and the stabilization controller are shown highly effective in real experiments.

All major research results presented in the thesis have been verified experimentally.

Keywords: Teleoperator, Force Feedback, Passivity, Stiff Contacts, Control, Robustness, Transparency, Bone Milling, Uncertainty
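The abstract does not spell out the detection algorithm itself, but passivity-based contact-instability detection is typically built around an energy observer that integrates the power flowing through the master port and flags the instant the accumulated energy turns negative. A minimal sketch of that idea, with all signal names and values assumed for illustration rather than taken from the thesis:

```python
import numpy as np

def passivity_observer(force, velocity, dt):
    """Accumulated energy into the port: E(k) = dt * sum_j f(j) * v(j).
    A one-port device behaves passively while this stays non-negative."""
    return np.cumsum(force * velocity) * dt

def detect_nonpassive(force, velocity, dt):
    """Indices where the observed energy violates passivity -- in a
    teleoperator, typically right after the slave makes a stiff contact."""
    return np.flatnonzero(passivity_observer(force, velocity, dt) < 0.0)

# Illustrative signals only: passive power flow up to t = 0.5 s, then an
# active (energy-generating) phase mimicking an oscillatory stiff contact.
dt = 1e-3
t = np.arange(1000) * dt
f = np.where(t < 0.5, 1.0, -3.0)   # contact force, N (invented)
v = np.full_like(t, 0.1)           # master velocity, m/s (invented)
print(detect_nonpassive(f, v, dt)[:1])  # first sample flagged non-passive
```

In a real teleoperator such an observer would run at the servo rate, with the stabilization controller (for example, an extra dissipative element at the master) switched in at the flagged samples.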
182

Optimal Stopping and Model Robustness in Mathematical Finance

Wanntorp, Henrik January 2008 (has links)
Optimal stopping and mathematical finance are intimately connected since the value of an American option is given as the solution to an optimal stopping problem. Such a problem can be viewed as a game in which we are trying to maximize an expected reward. The solution involves finding the best possible strategy, or equivalently, an optimal stopping time for the game. Moreover, the reward corresponding to this optimal time should be determined. It is also of interest to know how the solution depends on the model parameters. For example, when pricing and hedging an American option, the volatility needs to be estimated and it is of great practical importance to know how the price and hedging portfolio are affected by a possible misspecification. The first paper of this thesis investigates the performance of the delta hedging strategy for a class of American options with non-convex payoffs. It turns out that an option writer who overestimates the volatility will obtain a superhedge for the option when using the misspecified hedging portfolio. In the second paper we consider the valuation of a so-called stock loan when the lender is allowed to issue a margin call. We show that the price of such an instrument is equivalent to that of an American down-and-out barrier option with a rebate. The value of this option is determined explicitly together with the optimal repayment strategy of the stock loan. The third paper considers the problem of how to optimally stop a Brownian bridge. A finite horizon optimal stopping problem like this can rarely be solved explicitly. However, one expects the value function and the optimal stopping boundary to satisfy a time-dependent free boundary problem. By assuming a special form of the boundary, we are able to transform this problem into one which does not depend on time and solving this we obtain candidates for the value function and the boundary. Using stochastic calculus we then verify that these indeed satisfy our original problem. In the fourth paper we consider an investor wanting to take advantage of a mispricing in the market by purchasing a bull spread, which is liquidated in case of a market downturn. We show that this can be formulated as an optimal stopping problem which we then, using similar techniques as in the third paper, solve explicitly. In the fifth and final paper we study convexity preservation of option prices in a model with jumps. This is done by finding a sufficient condition for the no-crossing property to hold in a jump-diffusion setting.
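The third paper's assumption of a special boundary form has a classical benchmark: for maximizing E[X_τ] over stopping times of a Brownian bridge pinned at zero on [0, 1], the optimal rule (Shepp, 1969) is to stop once X_t ≥ B√(1−t), with B ≈ 0.84. A Monte Carlo sketch of this square-root rule, with step counts and the constant chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def stop_brownian_bridge(n_steps=500, n_paths=20_000, B=0.8399):
    """Simulate a Brownian bridge from 0 to 0 on [0, 1] and apply the
    square-root boundary rule: stop when X_t >= B * sqrt(1 - t)."""
    dt = 1.0 / n_steps
    x = np.zeros(n_paths)
    reward = np.zeros(n_paths)
    stopped = np.zeros(n_paths, dtype=bool)
    for k in range(n_steps):
        t = k * dt
        # Brownian bridge SDE: dX_t = -X_t / (1 - t) dt + dW_t
        x = x - x / (1.0 - t) * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
        hit = (~stopped) & (x >= B * np.sqrt(1.0 - (t + dt)))
        reward[hit] = x[hit]
        stopped |= hit
    return reward.mean()  # unstopped paths end at X_1 = 0

print(stop_brownian_bridge())  # Monte Carlo estimate of the value E[X_tau]
```

The thesis' verification step runs the other way: guess the boundary's form, reduce the time-dependent free boundary problem to a time-independent one, and confirm the candidate solution with stochastic calculus.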
183

An Empirical Analysis of Family Cost of Children: A Comparison of Ordinary Least Square Regression and Quantile Regression

Li, Yang January 2010 (has links)
Quantile regression has several advantages over OLS regression: it measures the full effect of a covariate across the response distribution, it is robust to outliers, and it enjoys an equivariance property. In this paper, I use survey data from Belgium and fit a linear model to illustrate these advantages. I then apply a quantile regression model to the raw data to analyze how family costs differ with the number of children, and test the results with a Wald test. The results show that, for most family types and living standards, from the lower quantiles to the upper quantiles the family cost of children increases with the number of children, with each child costing the same. A common pattern also emerges: at the upper quantiles (from the 0.75 to the 0.9 quantile) of the conditional distribution, the cost of the second child is significantly higher than the cost of the first child, both for non-working families and for families at all living standards.
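As a sketch of the OLS-versus-quantile contrast drawn here, the following fits both with statsmodels on simulated household data; every variable name and coefficient is invented for illustration (the Belgian survey data are not reproduced):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2_000
df = pd.DataFrame({
    "n_children": rng.integers(1, 4, size=n),
    "log_income": rng.normal(10.0, 0.5, size=n),
})
# Skewed errors make the conditional quantiles fan out -- exactly the
# situation where quantile regression says more than the mean does.
df["cost_share"] = (0.08 * df["n_children"] - 0.02 * df["log_income"]
                    + rng.gumbel(0.0, 0.05, size=n))

ols = smf.ols("cost_share ~ n_children + log_income", df).fit()
print("OLS:", round(ols.params["n_children"], 3))

for q in (0.25, 0.50, 0.75, 0.90):
    qr = smf.quantreg("cost_share ~ n_children + log_income", df).fit(q=q)
    print(f"q={q}:", round(qr.params["n_children"], 3))
```

A Wald test of equality of the n_children coefficients across quantiles, as in the paper, would then show whether the per-child cost differs between the lower and upper parts of the conditional distribution.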
184

On estimation in econometric systems in the presence of time-varying parameters

Brännäs, Kurt January 1980 (has links)
Economic systems are often subject to structural variability. For the achievement of correct structural specification in econometric modelling it is then important to allow for parameters that are time-varying, and to apply estimation techniques suitably designed for inference in such models. One realistic model assumption for such parameter variability is the Markovian model, and Kalman filtering is then assumed to be a convenient estimator. In the thesis several aspects of using Kalman filtering approaches to estimation in that framework are considered. The application of the Kalman filter to estimation in econometric models is straightforward if a set of basic assumptions are satisfied, and if necessary initial specifications can be accurately made. Typically, however, these requirements can generally not be perfectly met. It is therefore of great importance to know the consequences of deviations from the basic assumptions and correct initial specifications for inference, in particular for the small sample situations typical in econometrics. If the consequences are severe it is essential to develop techniques to cope with such aspects.

For estimation in interdependent systems a two stage Kalman filter is proposed and evaluated theoretically, by a small sample Monte Carlo study, and empirically. The estimator is approximative, but with promising small sample properties. Only if the transition matrix of the parameter model and an initial parameter vector are misspecified does the performance deteriorate. Furthermore, the approach provides useful information about structural properties, and forms a basis for good short term forecasting.

In a reduced form framework most of the basic assumptions of the traditional Kalman filter are relaxed, and the implications are studied. The case of stochastic regressors is, under reasonable additional assumptions, shown to result in an estimator structurally similar to that due to the basic assumptions. The robustness properties are such that in particular the transition matrix and the initial parameter vector should be carefully estimated. An estimator for the joint estimation of the transition matrix, the parameter vector and the model residual variance is suggested and utilized to study the consequences of a misspecified parameter model. By estimating the transitions, the parameter estimates are seen to be robust in this respect. (Four parts appended.)
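As a minimal sketch of the filtering problem described above (not the thesis' two-stage estimator), consider a regression whose coefficient vector follows a linear-Gaussian Markov transition; the transition matrix F, noise covariance Q, residual variance r, and the initial specifications below are precisely the quantities whose misspecification the thesis studies:

```python
import numpy as np

def kalman_tvp(y, X, F, Q, r, beta0, P0):
    """Kalman filter for a regression with time-varying parameters.

    Observation:  y_t = x_t' beta_t + e_t,      e_t ~ N(0, r)
    Transition:   beta_t = F beta_{t-1} + w_t,  w_t ~ N(0, Q)
    """
    n, k = X.shape
    beta, P = beta0.copy(), P0.copy()
    betas = np.zeros((n, k))
    for t in range(n):
        beta = F @ beta                      # predict
        P = F @ P @ F.T + Q
        x = X[t]
        S = x @ P @ x + r                    # innovation variance
        K = P @ x / S                        # Kalman gain
        beta = beta + K * (y[t] - x @ beta)  # update
        P = P - np.outer(K, x @ P)
        betas[t] = beta
    return betas
```

The small-sample questions raised in the thesis then amount to asking how the filtered path degrades when F, Q, r, beta0, or P0 above are wrong, and how far F can be estimated jointly with the parameters.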
185

Autonomic Core Network Management System

Tizghadam, Ali 11 December 2009 (has links)
This thesis presents an approach to the design and management of core networks where the packet transport is the main service and the backbone should be able to respond to unforeseen changes in network parameters in order to provide smooth and reliable service for the customers. Inspired by Darwin's seminal work describing the long-term processes in life, and with the help of graph theoretic metrics, in particular the "random-walk betweenness", we assign a survival value, the network criticality, to a communication network to quantify its robustness. We show that the random-walk betweenness of a node (link) consists of the product of two terms, a global measure which is fixed for all the nodes (links) and a local graph measure which is in fact the weight of the node (link). The network criticality is defined as the global part of the betweenness of a node (link). We show that the network criticality is a monotone decreasing, and strictly convex function of the weight matrix of the network graph. We argue that any communication network can be modeled as a topology that evolves based on survivability and performance requirements. The evolution should be in the direction of decreasing the network criticality, which in turn increases the network robustness. We use network criticality as the main control parameter and we propose a network management system, AutoNet, to guide the network evolution in real time. AutoNet consists of two autonomic loops, the slow loop to control the long-term evolution of robustness throughout the whole network, and the fast loop to account for short-term performance and robustness issues. We investigate the dynamics of network criticality and we develop a convex optimization problem to minimize the network criticality. We propose a network design procedure based on the optimization problem which can be used to develop the long-term autonomic loop for AutoNet. Furthermore, we use the properties of the duality gap of the optimization problem to develop traffic engineering methods to manage the transport of packets in a network. This provides for the short-term autonomic loop of AutoNet architecture. Network criticality can also be used to rank alternative networks based on their robustness to the unpredicted changes in network conditions. This can help find the best network structure under some pre-specified constraint to deal with robustness issues.
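Network criticality is closely tied to the graph Laplacian: up to a normalization (assumed below), it can be computed from the trace of the Laplacian's Moore-Penrose pseudoinverse, equivalently the total pairwise effective resistance, which is the monotone decreasing, strictly convex function of the weight matrix described above. A sketch under that identification:

```python
import numpy as np
import networkx as nx

def network_criticality(G):
    """Criticality-style robustness metric: proportional to the trace of
    the Laplacian pseudoinverse (total effective resistance). Smaller
    values indicate a more robust topology. Normalization assumed."""
    L = nx.laplacian_matrix(G, weight="weight").toarray().astype(float)
    n = G.number_of_nodes()
    return 2.0 * n * np.trace(np.linalg.pinv(L))

# Ranking alternative topologies, as suggested in the abstract:
ring = nx.cycle_graph(8)
mesh = nx.complete_graph(8)
print(network_criticality(ring), ">", network_criticality(mesh))
```

AutoNet's slow loop then corresponds to choosing link weights that minimize this convex function, and the fast loop to short-term traffic decisions informed by the same quantity.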
186

Real Robustness Radii and Performance Limitations of LTI Control Systems

Lam, Simon Sai-Ming 31 August 2011 (has links)
In the study of linear time-invariant systems, a number of definitions, such as controllability, observability, not having decentralized fixed modes, minimum phase, etc., have been made. These definitions are highly useful in obtaining existence results for solving various types of control problems, but a drawback to these definitions is that they are binary, which simply determines whether a system is, for instance, either controllable or uncontrollable. In practical situations, however, there are many uncertainties in a system's parameters caused by linearization, modelling errors, discretizations, and other numerical approximations and/or errors. So knowing that a system is controllable can sometimes be misleading if the controllable system is actually "almost" uncontrollable as a result of such uncertainties. Since an "almost" uncontrollable system poses significant difficulty in designing a quality controller, a continuous measure of controllability, called a controllability radius, is more desirable to use and has been widely studied in the past. The main focus of this thesis is to extend the development behind the controllability radius, with an emphasis on real parametric perturbations, to other definitions, replacing the traditional binary 'yes/no' metrics with continuous measures. We study four topics related to this development. First, we generalize the concept of real perturbation values of a matrix to the cases of matrix pairs and matrix triplets. By doing so, we are able to deal with more general perturbation structures and subsequently study, in addition to standard LTI systems, other types of systems such as LTI descriptor and time-delay systems. Second, we introduce the real decentralized fixed mode (DFM) radius, the real transmission zero at s radius, and the real minimum phase radius, which respectively measure how "close" i) a decentralized LTI system is to having a DFM, ii) a centralized system is to having a transmission zero at a particular point s in the complex plane, and iii) a minimum phase system is to being a nonminimum phase system. These radii are defined in terms of real parametric perturbations, and computable formulas for these radii are derived using a characterization based on real perturbation values and the aforementioned generalizations. Third, we present two efficient algorithms to i) solve the general real perturbation value problem, and ii) evaluate the various real LTI robustness radii introduced in this thesis. Finally, as the last topic, we study the ability of an LTI system to achieve high performance control, and characterize the difficulty of achieving high performance control using a new continuous measure called the Toughness Index. A number of examples involving the various measures are studied in this thesis.
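For complex perturbations, the controllability radius has a classical closed characterization (Eising): the distance from a controllable pair (A, B) to the nearest uncontrollable pair is min over s in C of sigma_min([sI - A, B]). The real radii introduced in the thesis require the real perturbation values developed there; the sketch below only grids over s to approximate the complex radius, for illustration:

```python
import numpy as np

def controllability_radius(A, B, s_grid):
    """Approximate the complex controllability radius
        min over s in C of sigma_min([sI - A, B])
    by minimizing over a finite grid of candidate s values."""
    n = A.shape[0]
    best = np.inf
    for s in s_grid:
        M = np.hstack([s * np.eye(n) - A, B])
        best = min(best, np.linalg.svd(M, compute_uv=False)[-1])
    return best

# A pair that is controllable but nearly uncontrollable (eigenvalues
# almost repeated), illustrating why a binary answer misleads.
A = np.array([[1.0, 0.0], [0.0, 1.001]])
B = np.array([[1.0], [1.0]])
re, im = np.meshgrid(np.linspace(0.0, 2.0, 81), np.linspace(-1.0, 1.0, 81))
print(controllability_radius(A, B, (re + 1j * im).ravel()))  # tiny radius
```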
189

On Two Combinatorial Optimization Problems in Graphs: Grid Domination and Robustness

Fata, Elaheh 26 August 2013 (has links)
In this thesis, we study two problems in combinatorial optimization, the dominating set problem and the robustness problem. In the first half of the thesis, we focus on the dominating set problem in grid graphs and present a distributed algorithm for finding near optimal dominating sets on grids. The dominating set problem is a well-studied mathematical problem in which the goal is to find a minimum size subset of vertices of a graph such that all vertices that are not in that set have a neighbor inside that set. We first provide a simpler proof for an existing centralized algorithm that constructs dominating sets on grids so that the size of the provided dominating set is upper-bounded by the ceiling of (m+2)(n+2)/5 for m by n grids and its difference from the optimal domination number of the grid is upper-bounded by five. We then design a distributed grid domination algorithm to locate mobile agents on a grid such that they constitute a dominating set for it. The basis for this algorithm is the centralized grid domination algorithm. We also generalize the centralized and distributed algorithms for the k-distance dominating set problem, where all grid vertices are within distance k of the vertices in the dominating set. In the second half of the thesis, we study the computational complexity of checking a graph property known as robustness. This property plays a key role in diffusion of information in networks. A graph G=(V,E) is r-robust if for all pairs of nonempty and disjoint subsets of its vertices A,B, at least one of the subsets has a vertex that has at least r neighbors outside its containing set. In the robustness problem, the goal is to find the largest value of r such that a graph G is r-robust. We show that this problem is coNP-complete. En route to showing this, we define some new problems, including the decision version of the robustness problem and its relaxed version in which B=V \ A. We show these two problems are coNP-hard by showing that their complement problems are NP-hard.
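The centralized construction mentioned above builds on periodic diagonal patterns; one classical pattern places a vertex at (i, j) whenever (i + 2j) is divisible by 5, which dominates the infinite grid efficiently and only needs patching near the borders of a finite grid. A sketch in that spirit (not the thesis' exact algorithm):

```python
from itertools import product

def grid_dominating_set(m, n):
    """Dominating set for the m x n grid: lay down the diagonal pattern
    (i + 2j) % 5 == 0, then greedily patch any vertices the pattern
    leaves uncovered at the borders."""
    D = {(i, j) for i, j in product(range(m), range(n)) if (i + 2 * j) % 5 == 0}

    def closed_nbrs(i, j):
        cands = [(i, j), (i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
        return [(a, b) for a, b in cands if 0 <= a < m and 0 <= b < n]

    def uncovered():
        covered = {u for d in D for u in closed_nbrs(*d)}
        return [v for v in product(range(m), range(n)) if v not in covered]

    for v in uncovered():              # patch border vertices greedily
        if not any(u in D for u in closed_nbrs(*v)):
            D.add(v)
    assert not uncovered()             # verify domination
    return D

D = grid_dominating_set(10, 10)
print(len(D), "<=", ((10 + 2) * (10 + 2) + 4) // 5)  # ceiling bound from the thesis
```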
190

The Consequences of stochastic gene expression in the nematode Caenorhabditis elegans

Burga Ramos, Alejandro Raúl, 1985- 20 July 2012 (has links)
Genetically identical cells and organisms growing in homogeneous environmental conditions can show significant phenotypic variation. Furthermore, mutations often have consequences that vary among individuals (incomplete penetrance). Biochemical processes such as those involved in gene expression are subject to fluctuations due to their inherent probabilistic nature. However, it is not clear how these fluctuations affect multicellular organisms carrying mutations, or whether stochastic variation in gene expression among individuals could confer any advantage on populations. We have investigated the consequences of stochastic gene expression using the nematode Caenorhabditis elegans as a model. Here we show that inter-individual stochastic variation in the induction of both specific and more general buffering systems combines to determine the outcome of inherited mutations in each individual. We also demonstrate that genetic and environmental robustness are coupled in C. elegans. Individuals with higher induction of the stress response are more robust to the effects of mutations, but they incur a fitness cost, suggesting that variation at the population level could be beneficial in unpredictable environments.
