About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Higher order discrete-time models with applications to multi-rate control

Comeau, A. Raymond (André Raymond) January 1997 (has links)
This thesis examines the fundamental relationship between a continuous-time system and its discrete-time models. This involves a study of the conditions that the state space realization of a model must satisfy in order to be valid. While such a study has been performed for models whose order equals that of the continuous-time system, this thesis also includes "higher order discrete-time models", that is, models whose order is higher than that of the continuous-time system. A strict mathematical definition of a model is presented, based upon the convergence, in a certain sense, of the time responses of the continuous-time system and its model. Theorems are also presented which can be used to prove the validity of models, and it is shown that many common discretization techniques, such as mapping models and hold equivalent models, are valid. Using these theorems, some of these discretization techniques can be generalized. However, the aim of this thesis is not to prove the validity of common discretization techniques, but to understand the conditions which a model must satisfy in order to be valid. Common discretization techniques simply provide convenient examples for this understanding.

The definition of models is later expanded to cover discrete-time time-varying and multi-rate systems. It is with multi-rate systems that the importance of higher order models becomes particularly apparent. Depending on the particular ratio of sampling rates between the plant input and output, some multi-rate systems must include inherently discrete-time operations, resulting in a higher order, for these systems to be considered valid. It is also shown that a discrete-time periodically time-varying system can model a time-invariant continuous-time system.

Finally, using the developed model concept, the practical problem of the multi-rate implementation of an analogue control system is considered. The method presented is an extension of the plant input mapping method, which is the only method capable of guaranteeing the stability of the digital closed-loop system provided the sampling period is nonpathological. Simulation examples illustrate the effectiveness of the proposed methods, even for very slow sampling periods.
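As an illustration of the hold-equivalent models mentioned above, the following minimal sketch computes a zero-order-hold (step-invariant) discrete-time model of a continuous-time state-space system using the standard augmented-matrix identity; the double-integrator matrices and sampling period are invented placeholders, not examples from the thesis.

```python
import numpy as np
from scipy.linalg import expm

def zoh_discretize(A, B, T):
    """Zero-order-hold (step-invariant) discretization of dx/dt = Ax + Bu.

    Uses the identity expm([[A, B], [0, 0]] * T) = [[Ad, Bd], [0, I]],
    so Ad = e^{AT} and Bd = (integral of e^{As} ds over [0, T]) B.
    """
    n, m = B.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    Md = expm(M * T)
    return Md[:n, :n], Md[:n, n:]

# Example: a double integrator sampled at T = 0.1 s.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Ad, Bd = zoh_discretize(A, B, 0.1)
print(Ad)  # [[1, 0.1], [0, 1]]
print(Bd)  # [[0.005], [0.1]]
```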
2

Multiscale geometric image processing

Romberg, Justin K. January 2004 (has links)
Since their introduction a little more than 10 years ago, wavelets have revolutionized image processing. Wavelet-based algorithms define the state of the art for applications including image coding (JPEG-2000), restoration, and segmentation. Despite their success, wavelets have significant shortcomings in their treatment of edges. Wavelets do not parsimoniously capture even the simplest geometrical structure in images, and wavelet-based processing algorithms often produce images with ringing around the edges. As a first step towards accounting for this structure, we will show how to explicitly capture the geometric regularity of contours in cartoon images using the wedgelet representation and a multiscale geometry model. The wedgelet representation builds up an image out of simple piecewise constant functions with linear discontinuities. We will show how the geometry model, by putting a joint distribution on the orientations of the linear discontinuities, allows us to weigh several factors when choosing the wedgelet representation: the error between the representation and the original image, the parsimony of the representation, and whether the wedgelets in the representation form "natural" geometrical structures. We will analyze a simple wedgelet coder based on these principles, and show that it has optimal asymptotic performance for simple cartoon images. Next, we turn our attention to piecewise smooth images, that is, images that are smooth away from a smooth contour. Using a representation composed of wavelets and wedgeprints (wedgelets projected into the wavelet domain), we develop a quadtree-based prototype coder whose rate-distortion performance is asymptotically near-optimal. We use these ideas to implement a full-scale image coder that outperforms JPEG-2000 both in peak signal-to-noise ratio (by 1-1.5 dB at low bitrates) and visually. Finally, we shift our focus to building a statistical image model directly in the wavelet domain. For applications other than compression, the approximate shift-invariance and directional selectivity of the slightly redundant complex wavelet transform make it particularly well suited for modeling singularity structure. Around edges in images, complex wavelet coefficients behave very predictably, exhibiting dependencies that we will exploit using a hidden Markov tree model. We demonstrate the effectiveness of the complex wavelet model with several applications: image denoising, multiscale segmentation, and feature extraction.
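To make the wedgelet representation concrete, here is a minimal sketch that fits a single wedgelet (two constants separated by a linear discontinuity) to an image block by exhaustive search over candidate edges; the block size, candidate-edge enumeration, and test image are illustrative choices, not the thesis's actual dictionary or coder.

```python
import numpy as np
from itertools import combinations

def best_wedgelet(block):
    """Fit a wedgelet (two constants split by a line) to a square block."""
    n = block.shape[0]
    ys, xs = np.mgrid[0:n, 0:n]
    # Candidate edges: lines through pairs of points on the block perimeter.
    perim = [(i, 0) for i in range(n)] + [(i, n - 1) for i in range(n)] \
          + [(0, j) for j in range(1, n - 1)] + [(n - 1, j) for j in range(1, n - 1)]
    best_err, best_fit = np.inf, None
    for (y0, x0), (y1, x1) in combinations(perim, 2):
        # Which side of the candidate line each pixel falls on (cross-product test).
        side = (x1 - x0) * (ys - y0) - (y1 - y0) * (xs - x0) > 0
        if side.all() or not side.any():
            continue  # degenerate split
        fit = np.where(side, block[side].mean(), block[~side].mean())
        err = float(((block - fit) ** 2).sum())
        if err < best_err:
            best_err, best_fit = err, fit
    return best_err, best_fit

# Example: a block containing an ideal straight edge is represented exactly.
blk = np.zeros((8, 8))
blk[:, 4:] = 1.0
err, fit = best_wedgelet(blk)
print(err)  # 0.0 for this cartoon block
```

For an ideal straight edge the fit is exact with just a handful of parameters, which is precisely the parsimony that wavelets lack around edges.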
3

Finite element reliability analysis of inelastic dynamic systems

Jagannath, Mukundagiri K. January 1996 (has links)
Due to the inherent uncertainties present in nature and the imperfect state of our knowledge, it is impossible to guarantee the satisfactory performance of any system in an absolute sense. Therefore, an approach such as reliability-based design, which offers a rational basis for taking the various sources of uncertainty into account in the design process and checking the computed probability of failure, is desirable. Structural reliability analysis also provides, as a by-product, various reliability sensitivity measures, which are very useful for rational decision making in structural design. In addition, the performance of large and complex structural systems can be predicted only through complicated numerical algorithms, such as the powerful finite element method. Hence, in order to evaluate the probability of failure of such systems for given limit states or failure criteria, finite element analysis and reliability analysis must be linked together to produce the finite element reliability method. In this study, the link between a general-purpose, research-oriented finite element program (FEAP) and a reliability analysis program (CALREL) is established. In order to realistically model the inelastic behavior of structural systems, several inelastic element routines are developed and implemented in FEAP. The algorithms required for computing accurately and efficiently the structural response gradient (needed in reliability analysis) with respect to basic material properties are also formulated and implemented in FEAP. Finally, finite element sensitivity and reliability analyses of several realistic structural examples are performed.
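The reliability side of such a finite element reliability method is commonly a first-order reliability method (FORM). The sketch below shows the classical HL-RF iteration in standard normal space, with an invented analytic limit-state function and gradient standing in for the finite element response; it is a sketch of the general technique, not of the FEAP-CALREL link itself.

```python
import numpy as np
from scipy.stats import norm

def form_hlrf(g, grad_g, n, tol=1e-8, max_iter=100):
    """First-order reliability method via the HL-RF iteration.

    Works in standard normal space u; returns the reliability index beta
    and the first-order failure probability Pf = Phi(-beta).
    """
    u = np.zeros(n)
    for _ in range(max_iter):
        gv, gr = g(u), grad_g(u)
        # HL-RF update: step onto the linearized limit-state surface.
        u_new = (gr @ u - gv) / (gr @ gr) * gr
        converged = np.linalg.norm(u_new - u) < tol
        u = u_new
        if converged:
            break
    beta = np.linalg.norm(u)
    return beta, norm.cdf(-beta)

# Invented limit state: failure when u1 + 2*u2 exceeds 3 (g < 0 is failure).
g = lambda u: 3.0 - u[0] - 2.0 * u[1]
grad = lambda u: np.array([-1.0, -2.0])
beta, pf = form_hlrf(g, grad, 2)
print(beta, pf)  # beta = 3/sqrt(5) ~ 1.342, Pf ~ 0.090
```

In the finite element setting, g(u) and its gradient come from a structural analysis and the response-gradient routines mentioned above rather than from closed-form expressions.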
4

Duality properties and sequential gradient-restoration algorithms for optimal control problems (numerical method)

Wang, Tong January 1985 (has links)
This thesis considers duality properties and their application to the sequential gradient-restoration algorithms (SGRA) for optimal control problems. Two problems are studied: (P1) the basic problem and (P2) the general problem. In Problem (P1), the minimization of a functional is considered subject to differential constraints and final constraints, the initial state being given; in Problem (P2), the minimization of a functional is considered subject to differential constraints, nondifferential constraints, initial constraints, and final constraints. Depending on whether the primal or the dual formulation is used, one obtains a primal sequential gradient-restoration algorithm (PSGRA) or a dual sequential gradient-restoration algorithm (DSGRA). With particular reference to Problem (P2), it is found convenient to split the control vector into an independent control vector and a dependent control vector, the latter having the same dimension as the nondifferential constraint vector. This modification enhances the computational efficiency of both the primal and the dual formulations. The basic property of the dual formulation is that the Lagrange multipliers associated with the gradient phase and the restoration phase of SGRA minimize a special functional, quadratic in the multipliers, subject to the multiplier differential equations and boundary conditions, for given state, control, and parameter. This duality property yields considerable computational benefits in that the auxiliary optimal control problems associated with the gradient phase and the restoration phase of SGRA can be reduced to mathematical programming problems involving a finite number of parameters as unknowns. Several numerical examples are solved using both the primal and the dual formulations.
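The alternation at the heart of SGRA can be conveyed by a finite-dimensional analogue: a gradient phase that descends along the tangent space of the constraints, followed by a restoration phase that steps back onto the constraint manifold. The quadratic objective and linear constraint below are invented for illustration and are not the thesis's optimal control problems.

```python
import numpy as np

def gradient_restoration(f_grad, c, c_jac, x0, step=0.1, iters=200):
    """Gradient-restoration iteration for min f(x) s.t. c(x) = 0 (finite-dim analogue)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        J = c_jac(x)
        # Gradient phase: steepest descent projected onto the tangent space of c(x) = 0.
        g = f_grad(x)
        P = np.eye(len(x)) - J.T @ np.linalg.solve(J @ J.T, J)
        x = x - step * (P @ g)
        # Restoration phase: minimum-norm Newton step back onto the constraint manifold.
        x = x - J.T @ np.linalg.solve(J @ J.T, c(x))
    return x

# Invented example: minimize ||x||^2 subject to x1 + x2 = 1 (solution: (0.5, 0.5)).
f_grad = lambda x: 2.0 * x
c = lambda x: np.array([x[0] + x[1] - 1.0])
c_jac = lambda x: np.array([[1.0, 1.0]])
print(gradient_restoration(f_grad, c, c_jac, [2.0, -3.0]))  # ~ [0.5, 0.5]
```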
5

Synthesis of linear multivariable feedback systems in infinite index norm

Wang, Zheng-Zhi January 1985 (has links)
The deficiency of the widely used LQG method is that it depends heavily on the precision of the plant parameters and the noise spectrum. The robustness problem can be formalized through singular value analysis. With the help of operator theory, a new method for the synthesis of linear multivariable feedback systems in the H∞ norm is developed from singular value analysis. The attractive feature of H∞ norm synthesis is its transparency with respect to robustness conditions: the weighting functions are directly related to the specifications of the design requirements. In this dissertation the LQG problem is restated as an interpolation problem in the H2 space. The interpolation problem in the simplest case can be solved by an explicit formula. The H∞ optimal norm can be obtained by considering the ratio of two H2 norms. The close relations and similarities between H∞ and H2 are brought out. The complete set of H∞ optimal solutions can be constructed by unitary dilation from the interpolation space. Explicit formulas in the s-domain are given for these purposes, including the repeated-zeros case and the degenerate case. The optimal solutions must belong to the degenerate case; in this case the problem can be solved by separating the singular part of the Pick matrix from the regular part by a Cholesky decomposition. These results are also developed in a recursive version for repeated zeros. The zeros and interpolation condition vectors of a system can be determined numerically by an algorithm that solves for the eigenvalues and eigenvectors of a pencil. Converting the two-sided problem to a one-sided problem, and the nonsquare problem to a square problem, is related to spectral factorization, which is discussed in detail. The optimal solutions of the nonsquare problem need not be all-pass, which is related to the existence of a critical point. Applied to the sensitivity design problem, the theory can be considered an extension of the classical lead-lag design method from SISO to MIMO with a more profound mathematical background. The robust stability problem can also be formalized and solved in this framework. The robust sensitivity design introduces a new type of mathematical problem, which can be approximated in this framework in certain situations. The regulation, tracking, filtering, and optimal controller design problems under an inexactly known noise spectrum can be solved in the general model in the H∞ space by introducing proper weighting functions.
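The solvability test underlying such interpolation problems is positive semidefiniteness of the Pick matrix. A minimal sketch for right-half-plane (s-domain) Nevanlinna-Pick data follows; the interpolation points and values are made up for illustration.

```python
import numpy as np

def pick_matrix(s, w):
    """Pick matrix for RHP Nevanlinna-Pick data: find F in H-infinity with
    ||F|| <= 1 and F(s_i) = w_i. Solvable iff P is positive semidefinite.

        P_ij = (1 - w_i * conj(w_j)) / (s_i + conj(s_j))
    """
    s, w = np.asarray(s, complex), np.asarray(w, complex)
    return (1 - np.outer(w, w.conj())) / (s[:, None] + s.conj()[None, :])

# Made-up interpolation data at right-half-plane points.
s = [1.0, 2.0 + 1.0j]
w = [0.3, 0.5 - 0.2j]
P = pick_matrix(s, w)
eigs = np.linalg.eigvalsh(P)  # P is Hermitian
print(eigs, bool((eigs >= -1e-12).all()))  # nonnegative => an interpolant with norm <= 1 exists
```

The degenerate case mentioned in the abstract corresponds to this matrix becoming singular, which is exactly where the optimal norm is attained.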
6

A universal hidden Markov tree image model

Romberg, Justin Keith January 1999 (has links)
Wavelet-domain hidden Markov models have proven to be useful tools for statistical signal and image processing. The hidden Markov tree (HMT) model captures the key features of the joint density of the wavelet coefficients of real-world data. One potential drawback to the HMT framework is the need for computationally expensive iterative training. We propose two reduced-parameter HMT models that capture the general structure of a broad class of real-world images. In the image HMT model, we use the fact that for real-world images the structure of the HMT is self-similar across scale, allowing us to reduce the complexity of the model to just nine parameters. In the universal HMT we fix these nine parameters, eliminating training while retaining nearly all of the key structure modeled by the full HMT. Finally, we propose a fast shift-invariant HMT estimation algorithm that outperforms all other wavelet-based estimators in the current literature.
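To make the HMT structure concrete, the following generative sketch propagates hidden small/large states down a quadtree via a persistence transition matrix and draws each wavelet coefficient from a zero-mean Gaussian whose variance depends on the state and decays across scale. The parameter values are invented for illustration and are not the trained parameters of the image HMT or the fixed parameters of the universal HMT.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative HMT parameters (not the paper's values):
P_TRANS = np.array([[0.9, 0.1],    # P(child state | parent = small): persistence
                    [0.2, 0.8]])   # P(child state | parent = large)
SIGMA0 = {0: 0.5, 1: 4.0}          # root std dev per state (0 = small, 1 = large)
DECAY = 0.5                        # variance decay per scale (wavelet smoothness)

def sample_hmt(depth):
    """Sample wavelet coefficients from a quadtree HMT, scale by scale."""
    states = np.array([[rng.integers(0, 2)]])  # root hidden state
    coeffs = []
    for j in range(depth):
        std = np.where(states == 1, SIGMA0[1], SIGMA0[0]) * DECAY ** j
        coeffs.append(rng.normal(0.0, std))
        # Each node spawns a 2x2 block of children; states follow the Markov chain.
        parents = np.repeat(np.repeat(states, 2, axis=0), 2, axis=1)
        states = (rng.random(parents.shape) < P_TRANS[parents, 1]).astype(int)
    return coeffs  # one array per scale: 1x1, 2x2, 4x4, ...

for c in sample_hmt(4):
    print(c.shape, float(np.abs(c).mean()))
```

Because the transitions and variances here are tied across scale, this toy model has only a handful of free parameters, which is the self-similarity the image HMT exploits to shrink the full HMT down to nine.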
7

Formulation and dynamical analysis of quantized progressive second price auctions

Jia, Peng January 2011 (has links)
The fundamental motivation for the work in this thesis is the study of decentralized dynamical decision systems and their optimization properties. The design of competitive markets to substitute for traditional centralized regulation has been considered in many domains, and a key feature of the decentralized decision mechanisms of competitive markets is that, subject to certain hypotheses, they maximize social welfare. Progressive auctions constitute a highly developed form of such market mechanisms, and hence in this thesis we construct and analyze them as paradigm examples of decentralized dynamical decision-making systems.

In the work of Lazar and Semret (1999), a so-called Progressive Second Price (PSP) auction mechanism was proposed for both dynamic market pricing and the allocation of variable-size resources. In this thesis, three quantized versions of the PSP auction are developed. First, a so-called Quantized-PSP (Q-PSP) auction algorithm is analyzed in which the agents have similar private demand functions and submit bids synchronously. It is shown that the nonlinear dynamics induced by this algorithm are such that the prices bid by the various agents and the quantities allocated to them converge in at most five iterations or oscillate indefinitely, with all prices converging to one price for all agents or to a limit cycle on just two prices for all agents. This behaviour is independent not only of the number of agents involved but also of the number of quantization levels. Second, the Aggressive-Defensive Quantized-PSP (ADQ-PSP) auction algorithm is presented, which improves upon the performance of the Q-PSP auction. For the ADQ-PSP auction applied to agent populations with randomly distributed demand functions, it is shown that the states of the corresponding dynamical systems rapidly converge with high probability to a quantized (Nash) equilibrium with a common price for all agents. Third, the Unique-limit Quantized-PSP (UQ-PSP) auction algorithm is developed as a modification of the ADQ-PSP; for this algorithm, (i) the limit price of all system trajectories is independent of the initial data, and (ii) modulo the quantization level, the limiting resource allocation is efficient (i.e., the corresponding social welfare function, or the sum of the individual valuation functions, is optimal up to the quantization level).

These quantized auction algorithms are first extended to supply auctions, that is, competitive markets where only sellers are assumed to exist, and then to double-sided auctions, where auctions are defined between sellers and buyers separately and interact in a well defined way. Finally, network-based auctions are considered; this is motivated by the fact that agents in communication networks or social networks may not be able to access the bid information of all other agents, or resource information over such networks, and hence must make decisions based solely upon local information. In particular, a two-level network-based auction is developed and formulated as a consensus UQ-PSP auction in which suppliers in the upper network recursively follow consensus dynamics to allocate quantities that are the subject of UQ-PSP auctions at each network node. This configuration solves the corresponding discrete-time weighted-average consensus problem, converges to a unique network-wide price, and achieves social efficiency for the whole network.
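A toy sketch conveys the flavor of quantized PSP bidding dynamics: agents best-respond on a quantized price grid under the PSP allocation rule (each agent receives at most the capacity left over after higher-priced rivals). The capacity, price grid, and valuations are invented, agents here always demand the full capacity, and the exclusion-compensation payment rule of the true PSP mechanism is omitted.

```python
import numpy as np

C = 10.0                              # divisible resource capacity (invented)
PRICES = np.linspace(0.0, 1.0, 11)    # quantized price grid
theta = np.array([0.9, 0.7, 0.5])     # marginal valuations (invented)

def psp_alloc(bids, i):
    """PSP allocation for agent i: capacity left after higher-priced rivals."""
    p_i, q_i = bids[i]
    rivals = sum(q for j, (p, q) in enumerate(bids) if j != i and p >= p_i)
    return min(q_i, max(0.0, C - rivals))

def best_response(bids, i):
    """Agent i picks the quantized price maximizing (theta_i - p) * allocation."""
    best_p, best_u = bids[i][0], -np.inf
    for p in PRICES:
        trial = list(bids)
        trial[i] = (p, C)
        u = (theta[i] - p) * psp_alloc(trial, i)
        if u > best_u:
            best_p, best_u = p, u
    return (best_p, C)

bids = [(0.0, C)] * 3
for t in range(20):                   # synchronous bid updates
    bids = [best_response(bids, i) for i in range(3)]
    print(t, [round(p, 2) for p, _ in bids])
```

Depending on the valuations, the printed bid trajectories either settle on a common price or cycle among a few prices, echoing the converge-or-oscillate behaviour established for the Q-PSP.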
8

Performance factors for fine end-point position control in robots

Wredenhagen, G. Finn (Gordon Finn) January 1994 (has links)
This thesis is concerned with the factors that affect robot performance in positioning control. Specifically, we focus on the problem of fine end-point motion control of the robot end-effector about a nominal point where the linearized dynamics can be used. Performance is measured in the context of linear quadratic (LQ) theory.

An LQ-based task-space performance index for robots is proposed. Several existing robots are examined for various transient tasks using this index, and for each an optimum operating location is found. A cheap control (i.e., large actuator energies) analysis is performed, and the limits to performance are determined (i.e., singular optimal control). An explicit solution for the performance is determined, and the computed-torque control law is examined.

An LQ-based piecewise linear control (PLC) law is derived that increases the LQ gain in a piecewise-constant manner as the system trajectory converges towards the origin. This law uses a succession of invariant sets of decreasing size, each with an associated LQ gain. The formulation gives rise to an iteration function whose solution is a fixed point. The development of the PLC law led to the unveiling of a number of key properties, namely that the solution to the algebraic Riccati equation is concave with respect to both the actuator weighting and the state weighting matrices. A time-varying extension of the PLC law and an overshoot control scheme are also derived.

Issues regarding the state estimation problem are studied. Noise is introduced to account for model uncertainty, and a transient and steady-state Kalman filter analysis is performed. Sensor issues are examined for robots. The Kalman filter is used to fuse joint sensor data, Cartesian position sensor data, and tachometer data to provide a single best estimate of the state and to eliminate position offsets due to model error.

Finally, the effects of unmodeled dynamics, model error, and non-linearities on performance are examined. A Kalman filter is used to eliminate bias positioning errors at the robot's end-effector. Performance-uncertainty curves are generated using a numerical convex optimization method when the system is subject to parametric uncertainty. Describing functions are used to examine the backlash non-linearity.
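The gain-scheduling idea behind the PLC law can be sketched by computing a family of LQ gains for increasingly aggressive state weights and switching gains as the state enters smaller regions. In the thesis the regions are invariant sets derived from the Riccati solution; plain norm balls, an invented plant, and invented weights are used below for brevity.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative discrete-time double integrator (not a robot model from the thesis).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
R = np.array([[1.0]])

def lq_gain(q_scale):
    """Discrete LQ gain for state weight q_scale * I."""
    Q = q_scale * np.eye(2)
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Piecewise-constant schedule: tighter gain inside smaller regions.
thresholds = [1.0, 0.3, 0.1]                       # ||x|| boundaries (invented)
gains = [lq_gain(q) for q in (1.0, 10.0, 100.0, 1000.0)]

def plc_control(x):
    level = sum(np.linalg.norm(x) < t for t in thresholds)
    return -gains[level] @ x                        # stiffer feedback near the origin

x = np.array([2.0, 0.0])
for k in range(100):
    x = A @ x + (B @ plc_control(x)).ravel()
print(np.linalg.norm(x))  # state driven near the origin
```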
9

A Markov chain flow model with application to flood forecasting

Yapo, Patrice Ogou, 1967- January 1992 (has links)
This thesis presents a new approach to streamflow forecasting. The approach is based on specifying the probabilities that the next flow of a stream will occur within different ranges of values. Hence, this method differs from time series models, where point estimates are given as forecasts. With this approach, flood forecasting is possible by focusing on a preselected range of streamflows. A double-criteria objective function is developed to assess the model performance in flood prediction. Three case studies are examined based on data from the Salt River in Phoenix, Arizona and Bird Creek near Sperry, Oklahoma. The models presented are: a first order Markov chain (FOMC), a second order Markov chain (SOMC), and a first order Markov chain with rainfall as an exogenous input (FOMCX). The three forecast methodologies are compared with one another and against time series models. It is shown that the SOMC performs better than the FOMC, while the FOMCX performs better than the time series models.
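A minimal sketch of the first-order Markov chain (FOMC) idea: discretize flows into ranges, estimate the transition matrix by counting observed transitions, and forecast the probability distribution of the next flow range, including a flood range. The synthetic flow record and bin edges are placeholders for real streamflow data such as the Salt River record.

```python
import numpy as np

rng = np.random.default_rng(1)
flows = rng.gamma(2.0, 50.0, size=2000)                      # synthetic daily flows (m^3/s)
edges = np.array([0.0, 50.0, 100.0, 200.0, 400.0, np.inf])   # flow ranges; last = "flood"

states = np.digitize(flows, edges) - 1         # range index of each observation
n = len(edges) - 1
counts = np.zeros((n, n))
for a, b in zip(states[:-1], states[1:]):      # tally observed transitions
    counts[a, b] += 1
row_sums = counts.sum(axis=1, keepdims=True)
T = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

# Forecast: distribution of the next flow range given the current one.
today = states[-1]
print("current range:", today)
print("P(next range):", np.round(T[today], 3))
print("P(flood next):", round(T[today, -1], 4))
```

A second-order chain (SOMC) conditions on the last two ranges instead of one, and the FOMCX variant adds the rainfall state as a further conditioning variable.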
10

Analysis of complex social systems by agent-based simulation

Zhao, Jijun January 2005 (has links)
This dissertation studied complex social systems that have a large number of individuals and complicated functional relations among those individuals. The Prisoner's Dilemma (PD), including Social Dilemmas (SDs), is a type of problem arising from collective actions in social systems. Previous PD studies have limitations and are not suitable for the study of collective actions in complex social systems: the large number of individuals and the complexity of the models make theoretical, analytical studies impossible. Agent-based computer simulation is therefore used in this dissertation to investigate the N-person Prisoner's Dilemma (NPD) and its new extensions. My research can be divided into three parts (three appendices in this dissertation). In the first, the classical NPD model is considered, a much faster algorithm is developed, and the long-term behavior of Pavlovian agents is examined. This study keeps the main feature of the classical PD model by restricting the state space to two possibilities: cooperation and defection. In most social situations, however, the state space is much more complicated, so in the second study an NPD with a continuous state space is introduced: a continuous variable describes the cooperation level of the participating individuals, and a stochastic differential equation models the state change of individuals. Public media and personal influence are introduced for the first time in the study of the NPD. The third model analyzes the dynamic process of fund raising for a public radio station. This model is a combination of the other two: it is discrete in the sense that whether or not to donate in a time period is a discrete variable, while the amount an individual can pledge to the station is a continuous variable. In all three models, individual personalities are considered and quantified. The major personality types that might affect the possible cooperation or defection of the agents are captured in the continuous NPD simulation; the major motivations that might affect the probability of pledging in a certain time period, and the pledged amount, are captured in the fund-raising case. During the computer simulation, the behavior of each agent and the behavior of the entire society can be monitored.
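A minimal sketch of the two-state NPD with Pavlovian agents: each agent keeps its action when its payoff meets an aspiration level and switches otherwise (a win-stay, lose-shift style rule). The payoff coefficients and aspiration level are invented; the dissertation's actual update rules and parameters may differ.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1000
actions = rng.integers(0, 2, N)    # 1 = cooperate, 0 = defect

def payoffs(actions):
    """Linear NPD payoffs: both rise with the cooperating fraction,
    but defectors always earn more (the free-rider incentive)."""
    x = actions.mean()              # fraction of cooperators
    coop_pay = 2.0 * x - 1.0        # invented coefficients
    defect_pay = 2.0 * x - 0.5
    return np.where(actions == 1, coop_pay, defect_pay)

ASPIRATION = 0.3                    # invented aspiration level
for t in range(50):
    pay = payoffs(actions)
    # Pavlovian rule: keep the action if satisfied, otherwise switch.
    unsatisfied = pay < ASPIRATION
    actions = np.where(unsatisfied, 1 - actions, actions)
    if t % 10 == 0:
        print(t, "cooperation level:", round(float(actions.mean()), 3))
```

Vectorizing the whole population update this way is also the kind of step that makes such simulations much faster than agent-by-agent loops.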
