About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Constraints and QoS Management of Personal Process

Pin, Kao, 12 August 2004
This thesis addresses the correctness requirements of a formal model called the personal process model. A personal process is a coordination of personal activities, each requiring a joint effort between a user and an enacting organization. We identify data and temporal dependencies as the key elements of personal process coordination. We define correctness for personal process types and instances, and we identify three key QoS measures on personal process instances, namely response time, cost, and reliability. A personal process is managed by a personal workflow management system (PWFMS) running on a handheld device. Because handheld devices usually impose strict limits on computation power and battery consumption, we propose efficient algorithms for verifying the correctness and analyzing the QoS of a personal process at run time.
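To illustrate how these three QoS measures compose along a sequential personal process, the following Python sketch aggregates them over a chain of activities. It is a hypothetical simplification for illustration only (the thesis's run-time algorithms are not reproduced here): response times and costs accumulate additively, while reliability is the product of the individual success probabilities.

```python
from dataclasses import dataclass
from functools import reduce

@dataclass
class Activity:
    name: str
    response_time: float   # e.g. hours until the enacting organization responds
    cost: float            # monetary cost of the activity
    reliability: float     # probability the activity completes successfully

def aggregate_qos(activities):
    """Aggregate QoS over a purely sequential chain of activities."""
    total_time = sum(a.response_time for a in activities)
    total_cost = sum(a.cost for a in activities)
    reliability = reduce(lambda r, a: r * a.reliability, activities, 1.0)
    return total_time, total_cost, reliability

# Hypothetical personal process: obtaining a travel visa.
process = [Activity("request visa form", 24.0, 0.0, 0.99),
           Activity("medical exam", 72.0, 120.0, 0.95),
           Activity("submit application", 48.0, 60.0, 0.90)]
print(aggregate_qos(process))   # (144.0, 180.0, 0.84645)
```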
2

Maintaining Global Consistency in Advanced Database Systems

John Gilmore, Unknown Date
The thesis examines issues of consistency maintenance in advanced database systems, primarily multidatabase systems. A multidatabase system consists of a number of pre-existing local database systems. A local database system is unaware of its participation in the multidatabase system and, likewise, the multidatabase system has no knowledge of local transaction executions. Enforcing global constraints in such an environment is clearly a challenging task. A methodology for constraint enforcement is presented which utilises existing data-replication technology in an attempt to enforce global consistency. While it is shown to have limited applicability, it is nonetheless an interesting study and serves to qualify the limits of such a solution.

An alternative method for global consistency maintenance, which relies on the existence of triggers at each of the participating local databases, is then discussed. This method is shown to be particularly suitable when local database autonomy is of concern. It is, however, only suited to systems where each of the local databases can trigger external actions based on the occurrence of particular database events. This methodology makes apparent the need to identify enforcement actions that access the site where the instigating transaction originated: such enforcement actions can cause deadlock when they are executed at the same site that initially triggered the global constraint. This issue is dealt with in a novel way by proposing a methodology for statically checking the relations at each participating site to determine their susceptibility to this form of deadlock. The method, a graphical representation of the constraint enforcement process in a distributed system, is also shown to have other desirable properties.

Arising from the requirements of other work within the thesis, an algorithm for detecting all cycles in a given directed graph is presented. It is shown that, while the well-known adaptation of the depth-first search algorithm to cycle detection in directed graphs can detect the existence of cycles, it cannot in all circumstances identify all of them. An algorithm which performs this task is presented, together with an analysis of its complexity and correctness. In a more general sense, the issue of deferred constraint enforcement is discussed: several scenarios where deferred enforcement of constraints is required are presented, together with a method for detecting cyclic dependencies within a given database schema.
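The point about depth-first search is worth making concrete: a single DFS pass reports a back edge, proving that some cycle exists, but it does not enumerate every elementary cycle. The Python sketch below is an illustrative brute-force enumerator, not the thesis's algorithm: it restarts a DFS from each vertex and keeps only cycles whose smallest vertex is the start, so each cycle is reported exactly once.

```python
def all_elementary_cycles(graph):
    """Enumerate every elementary cycle of a directed graph given as
    {vertex: [successors]}. From each start vertex s we only visit
    vertices larger than s, so each cycle is found once, from its
    smallest vertex. Worst case is exponential, as the number of
    elementary cycles itself can be."""
    cycles = []

    def dfs(start, v, path, on_path):
        for w in graph.get(v, ()):
            if w == start:
                cycles.append(path + [start])        # closed a cycle
            elif w > start and w not in on_path:
                on_path.add(w)
                dfs(start, w, path + [w], on_path)
                on_path.remove(w)

    for s in sorted(graph):
        dfs(s, s, [s], {s})
    return cycles

# Two overlapping cycles; a single DFS pass would report only one back edge.
g = {0: [1], 1: [2], 2: [0, 1]}
print(all_elementary_cycles(g))   # [[0, 1, 2, 0], [1, 2, 1]]
```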
3

Thrust Vector Control of Multi-Body Systems Subject to Constraints

Nguyen, Tâm Willy, 11 December 2018
This dissertation focuses on the constrained control of multi-body systems actuated by vectorized thrusters. A general control framework is proposed to stabilize the task configuration while ensuring constraint satisfaction at all times. For this purpose, the equations of motion of the system are derived using the Euler-Lagrange method. It is shown that, under some reasonable conditions, the system dynamics are decoupled. This property is exploited in a cascade control scheme to stabilize the equilibrium points of the system. The scheme is composed of an inner loop, which controls the attitude of the vectorized thrusters, and an outer loop, which stabilizes the task configuration of the system to a desired configuration. Stability is proven using input-to-state stability and small-gain arguments. All stability properties are derived in the absence of constraints and are shown to be local. The main result of this analysis is that the proposed control scheme can be applied directly under the assumption that a suitable mapping between the generalized force and the real inputs of the system is designed. This thesis proposes to enforce constraints by augmenting the control scheme with two types of Reference Governor units: the Scalar Reference Governor and the Explicit Reference Governor. The dissertation presents two case studies which inspired the main generalization of this thesis: (i) the control of an unmanned aerial vehicle and an unmanned ground vehicle manipulating an object, and (ii) the control of a tethered quadrotor. Two further case studies are then discussed to show that the generalized control framework can be applied directly when a suitable mapping is designed. / Doctorat en Sciences de l'ingénieur et technologie
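For readers unfamiliar with reference governors, the following Python sketch shows the basic idea behind a scalar reference governor. It is a hypothetical minimal illustration, not the dissertation's design: the governor filters the desired reference r into an applied reference v + kappa*(r - v), picking the largest kappa in [0, 1] whose predicted response keeps every constraint satisfied. The constraint check is a user-supplied predicate, and feasibility is assumed monotone in kappa.

```python
def scalar_reference_governor(v, r, constraint_ok, resolution=100):
    """One governor update: move the applied reference v toward the desired
    reference r as far as constraints allow. `constraint_ok(candidate)` must
    predict (e.g. by simulating the closed loop) whether tracking `candidate`
    keeps all constraints satisfied. Feasibility is assumed monotone in
    kappa, which is why a simple sweep from 0 to 1 suffices in this sketch."""
    best = v
    for i in range(resolution + 1):
        kappa = i / resolution
        candidate = v + kappa * (r - v)
        if constraint_ok(candidate):
            best = candidate          # feasible: keep the larger step
        else:
            break                     # first infeasible kappa ends the sweep
    return best

# Toy usage: track a desired reference of 2.0 under a position limit of 1.0.
v_applied = 0.0
for _ in range(5):
    v_applied = scalar_reference_governor(v_applied, 2.0, lambda c: c <= 1.0)
print(v_applied)   # 1.0: the governor saturates at the constraint boundary
```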
4

The protein folding recognition problem: a proposed solution in constraint logic programming

Διαμαντόπουλος, Νικόλαος, 05 February 2015
Understanding the molecular mechanisms of life requires decoding the functions that proteins perform in an organism. Tens of thousands of proteins have been studied in recent years with the goal of determining their three-dimensional structure, which essentially determines their function. Nevertheless, the experimental methods in use, although accurate, remain particularly difficult and demanding in both time and financial cost. For this reason, computational methods are employed to reduce the cost of predicting the 3D structure of a protein when the linear sequence of its amino acids is known. This thesis approaches this task as an optimization problem and solves it with a declarative approach in constraint logic programming. The proposal is based on a face-centered cubic lattice as the topological model for placing the protein in space. Information about any secondary structures present in the protein, as well as other heuristics, is used to significantly reduce the search space. The first results on real proteins are encouraging with respect to accuracy and running time, as well as to scaling of the problem size.
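To make the optimization formulation concrete, here is a small Python sketch of lattice protein folding in the toy 2D HP model. This is a hypothetical simplification for illustration only; the thesis uses a face-centered cubic lattice and constraint logic programming, neither of which is reproduced here. Consecutive residues must occupy adjacent lattice sites, no two residues may overlap, and the objective rewards contacts between hydrophobic (H) residues.

```python
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def contacts(sequence, positions):
    """Count non-consecutive H-H pairs sitting on adjacent lattice sites."""
    occupied = {p: i for i, p in enumerate(positions)}
    score = 0
    for i, (x, y) in enumerate(positions):
        if sequence[i] != 'H':
            continue
        for dx, dy in MOVES:
            j = occupied.get((x + dx, y + dy))
            if j is not None and j > i + 1 and sequence[j] == 'H':
                score += 1
    return score

def fold(sequence):
    """Exhaustive search over self-avoiding walks on the 2D square lattice."""
    best = (-1, None)

    def extend(positions):
        nonlocal best
        if len(positions) == len(sequence):
            score = contacts(sequence, positions)
            if score > best[0]:
                best = (score, list(positions))
            return
        x, y = positions[-1]
        for dx, dy in MOVES:
            nxt = (x + dx, y + dy)
            if nxt not in positions:      # self-avoidance constraint
                positions.append(nxt)
                extend(positions)
                positions.pop()

    extend([(0, 0), (0, 1)])              # fix the first bond to break symmetry
    return best

print(fold("HPPH"))   # (1, [(0, 0), (0, 1), (1, 1), (1, 0)]): one H-H contact
```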
5

Computational workflow management for conceptual design of complex systems: an air-vehicle design perspective

Balachandran, Libish Kalathil, January 2007
The decisions taken during the aircraft conceptual design stage are of paramount importance, since they commit up to eighty percent of the product life-cycle costs. Thus, in order to obtain a sound baseline which can then be passed on to the subsequent design phases, various studies ought to be carried out during this stage. These include trade-off analysis and multidisciplinary optimisation performed on computational processes assembled from hundreds of relatively simple mathematical models describing the underlying physics and other relevant characteristics of the aircraft. However, the growing complexity of aircraft design in recent years has prompted engineers to substitute the conventional algebraic equations with compiled software programs (referred to as models in this thesis) which still encapsulate the mathematical models but allow for controlled expansion and manipulation of the computational system. This tendency has posed the research question of how to dynamically assemble and solve a system of non-linear models. In this context, the objective of the present research has been to develop methods which significantly increase the flexibility and efficiency with which the designer is able to operate on large-scale computational multidisciplinary systems at the conceptual design stage. To achieve this objective, a novel computational process modelling method has been developed for generating computational plans for a system of non-linear models. The computational process modelling was subdivided into variable flow modelling, decomposition, and sequencing. A novel method named the Incidence Matrix Method (IMM) was developed for variable flow modelling, the process of identifying the data flow between the models based on a given set of input variables. This method has the advantage of rapidly producing feasible variable flow models for a system of models with multiple outputs. In addition, criteria were derived for choosing the optimal variable flow model, i.e. the one leading to faster convergence of the system. Cont/d.
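As an indication of what variable flow modelling involves, the following Python sketch resolves a data-flow evaluation order for a set of models given a choice of known input variables. It is a hypothetical one-unknown-per-model simplification for illustration only; the thesis's Incidence Matrix Method and its handling of multiple outputs are not reproduced here.

```python
def plan_sequence(models, known):
    """models: {name: set of variables the model relates}; known: input
    variables fixed by the designer. A model is treated as solvable once all
    but one of its variables are known, whereupon it determines the last one.
    Returns an evaluation order, or raises when only a strongly coupled block
    remains (which would need simultaneous, iterative solution)."""
    known = set(known)
    order = []
    remaining = dict(models)
    while remaining:
        progress = False
        for name, variables in list(remaining.items()):
            unknown = variables - known
            if len(unknown) <= 1:
                known |= unknown               # model yields its last unknown
                order.append((name, unknown))
                del remaining[name]
                progress = True
        if not progress:
            raise ValueError(f"coupled block needs iteration: {set(remaining)}")
    return order

# Hypothetical mini-system of three coupled conceptual-design models.
models = {"aero":    {"altitude", "speed", "lift"},
          "weights": {"lift", "fuel_mass"},
          "range":   {"fuel_mass", "speed", "range_nm"}}
print(plan_sequence(models, known={"altitude", "speed"}))
# [('aero', {'lift'}), ('weights', {'fuel_mass'}), ('range', {'range_nm'})]
```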
6

On the Consistency, Characterization, Adaptability and Integrity of Database Replication Systems

Ruiz Fuertes, María Idoia, 30 September 2011
From the appearance of the first distributed databases up to today's modern replication systems, the research community has proposed multiple protocols to manage the distribution and replication of data, together with concurrency control algorithms to handle the transactions running at all nodes of the system. Many protocols are therefore available, each with different characteristics and performance, and guaranteeing different consistency levels. To determine which replication protocol is the most appropriate, two aspects must be considered: the required level of consistency and isolation (i.e., the correctness criterion), and the properties of the system (i.e., the scenario), which determine the achievable performance. Regarding correctness criteria, one-copy serializability is widely accepted as the highest level of correctness. Its definition, however, admits different interpretations with respect to replica consistency. This thesis establishes a correspondence between memory consistency models, as defined in the field of distributed shared memory, and the possible levels of replica consistency, thereby defining new correctness criteria that correspond to the different interpretations identified for one-copy serializability. Once the correctness criterion is selected, the performance achievable by a system depends largely on the scenario, that is, on the combination of the system environment and the applications running on it. For an administrator to select an appropriate replication protocol, the available protocols must be known fully and in depth: a good description of each candidate is essential, but a common framework is imperative for comparing the different options and estimating their performance in a given scenario. The results presented in this thesis fulfil the stated objectives and constituted a contribution to the state of the art of database replication at the time the respective works were undertaken. These results are also relevant because they open the door to possible future contributions. / Ruiz Fuertes, MI. (2011). On the Consistency, Characterization, Adaptability and Integrity of Database Replication Systems [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/11800
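One way to see why one-copy serializability leaves room for interpretation at the replica level: an asynchronously updated copy can serve stale reads even while the transaction history remains serializable at the primary. The Python sketch below is a hypothetical toy, not taken from the thesis, illustrating that gap.

```python
class ReplicatedKV:
    """Toy primary-copy replication with deferred (asynchronous) propagation."""

    def __init__(self):
        self.primary = {}
        self.secondary = {}
        self.pending = []          # committed updates not yet shipped

    def write(self, key, value):
        self.primary[key] = value  # commit at the primary immediately
        self.pending.append((key, value))

    def propagate(self):
        for key, value in self.pending:
            self.secondary[key] = value
        self.pending.clear()

db = ReplicatedKV()
db.write("x", 1)
print(db.secondary.get("x"))   # None: the secondary still exposes stale state
db.propagate()
print(db.secondary.get("x"))   # 1: replicas converge only after propagation
```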
