41

Reachability problems for systems with linear dynamics

Chen, Shang January 2016
This thesis deals with reachability and freeness problems for systems with linear dynamics, including hybrid systems and matrix semigroups. Hybrid systems are a type of dynamical system that exhibit both continuous and discrete dynamic behaviour. Thus they are particularly useful in modelling practical real-world systems which can both flow (continuous behaviour) and jump (discrete behaviour). Decision questions for matrix semigroups have attracted a great deal of attention in both the Mathematics and Theoretical Computer Science communities. They can also be used to model applications with only discrete components. For a computational model, the reachability problem asks whether we can reach a target point starting from an initial point, which is a natural question both in theoretical study and for real-world applications. By studying this problem and its variations, we shall prove in a formal mathematical sense that many problems are intractable or even unsolvable. Thus we know that when such a problem appears in other areas like Biology, Physics or Chemistry, either the problem itself needs to be simplified, or it should be studied by approximation. In this thesis we concentrate on a specific hybrid system model, called an HPCD, and its variations. The objective of studying this model is twofold: to obtain the most expressive system for which reachability is algorithmically solvable and to explore the simplest system for which it is impossible to solve. For the solvable sub-cases, we shall also study whether reachability is in some sense easy or hard by determining which complexity classes the problem belongs to, such as P, NP(-hard) and PSPACE(-hard). Some undecidability results for matrix semigroups are also shown, which both strengthen our knowledge of the structure of matrix semigroups and lead to undecidability results for other models.
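To make the central decision question concrete: the reachability (membership) problem for a matrix semigroup asks whether a target matrix can be obtained as a product of matrices drawn from a given generating set. The minimal Python sketch below only enumerates products up to a fixed length; it is illustrative and not the thesis's method, since the general problem is undecidable and no bounded search can be complete.

import numpy as np

def reachable(generators, target, max_length=6):
    # Bounded search for `target` as a product of at most `max_length` generators.
    # Illustrative only: the membership problem for integer matrix semigroups is
    # undecidable in general, so a bounded search can never be complete.
    gens = [np.array(g, dtype=np.int64) for g in generators]
    tgt = np.array(target, dtype=np.int64)
    frontier = [np.identity(len(tgt), dtype=np.int64)]
    for _ in range(max_length):
        frontier = [m @ g for m in frontier for g in gens]  # products one matrix longer
        if any(np.array_equal(m, tgt) for m in frontier):
            return True
    return False

# Example: [[1, 3], [0, 1]] is the cube of the shear matrix [[1, 1], [0, 1]].
gens = [[[1, 1], [0, 1]], [[1, 0], [1, 1]]]
print(reachable(gens, [[1, 3], [0, 1]]))  # True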
42

Resilience of an embedded architecture using hardware redundancy

Castano, Victor January 2014
In the last decade, the dominance of the general computing systems market has been overtaken by embedded systems, with billions of units manufactured every year. Embedded systems appear in contexts where continuous operation is of utmost importance and the consequences of failure can be profound. Nowadays, radiation poses a serious threat to the reliable operation of safety-critical systems. Fault avoidance techniques, such as radiation hardening, have been commonly used in space applications. However, these components are expensive, lag behind commercial components with regard to performance and do not provide 100% fault elimination. Without fault tolerant mechanisms, many of these faults can become errors at the application or system level, which, in turn, can result in catastrophic failures. In this work we study the concepts of fault tolerance and dependability and extend these concepts by providing our own definition of resilience. We analyse the physics of radiation-induced faults, the damage mechanisms of particles and the process that leads to computing failures. We provide extensive taxonomies of 1) existing fault tolerant techniques and 2) the effects of radiation in state-of-the-art electronics, analysing and comparing their characteristics. We propose a detailed model of faults and provide a classification of the different types of faults at various levels. We introduce a fault tolerance algorithm and define the system states and actions necessary to implement it. We introduce novel hardware and system software techniques that provide a more efficient combination of reliability, performance and power consumption than existing techniques. We propose a new element of the system, called the syndrome, which is the core of a resilient architecture whose software and hardware can adapt to reliable and unreliable environments. We implement a software simulator and disassembler and introduce a testing framework in combination with ERA's assembler and commercial hardware simulators.
43

Contextual Markovian Models

Radenen, Mathieu 30 September 2014
Modeling time series has practical applications in many domains: speech, gesture and handwriting recognition, synthesis of realistic character animations, etc. The starting point of our modeling is that an important part of the variability between observation sequences may be the consequence of a few contextual variables that remain fixed all along a sequence or that vary slowly with time. For instance, a sentence may be uttered quite differently according to the speaker's emotion, and a gesture may have more amplitude depending on the height of the performer. Such variability cannot always be removed through preprocessing. We first propose the generative framework of Contextual Hidden Markov Models (CHMM) to model directly the influence of contextual information on observation sequences by parameterizing the probability distributions of HMMs with static or dynamic contextual variables. We test various instances of this framework on classification of handwritten characters, speech recognition and synthesis of eyebrow motion from speech for a virtual avatar. For each of these tasks, we investigate to what extent such modeling can translate into performance gains. We then introduce a natural and efficient way to exploit contextual information in Contextual Hidden Conditional Random Fields (CHCRF), the discriminative counterpart of CHMMs. CHCRF may be viewed as an efficient way to learn an HCRF that exploits contextual information. Finally, we propose a Transfer Learning approach to learn Contextual HMMs from fewer examples. This method relies on sharing information between classes, where generative approaches would normally learn an independent model per class.
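As a concrete illustration of the idea behind CHMMs, the sketch below parameterizes a Gaussian emission distribution with a static context vector, shifting the state-dependent mean linearly with the context. The linear parameterization, the class name and the dimensions are illustrative assumptions, not the exact formulation used in the thesis.

import numpy as np

class ContextualGaussianEmission:
    # Gaussian emission whose mean is shifted linearly by a context vector:
    # mu(state, context) = mu0[state] + W[state] @ context.
    # Sketch of the idea only; the thesis's parameterization may differ.

    def __init__(self, n_states, obs_dim, ctx_dim, rng=None):
        rng = rng or np.random.default_rng(0)
        self.mu0 = rng.normal(size=(n_states, obs_dim))         # context-free means
        self.W = rng.normal(size=(n_states, obs_dim, ctx_dim))  # context weights
        self.var = np.ones((n_states, obs_dim))                 # diagonal variances

    def log_prob(self, state, obs, context):
        mean = self.mu0[state] + self.W[state] @ context
        return -0.5 * np.sum(
            np.log(2 * np.pi * self.var[state]) + (obs - mean) ** 2 / self.var[state]
        )

emission = ContextualGaussianEmission(n_states=3, obs_dim=2, ctx_dim=1)
print(emission.log_prob(state=0, obs=np.zeros(2), context=np.array([1.7])))  # e.g. performer height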
44

Flexible service choreography

Barker, Adam January 2007
Service-oriented architectures are a popular architectural paradigm for building software applications from a number of loosely coupled, distributed services. Through a set of procedural rules, workflow technologies define how groups of services coordinate with one another to achieve a shared task. A problem with workflow specifications is that the patterns of interaction between the distributed services are often too complicated to predict and analyse at design-time. In certain cases, the exact patterns of message exchange and the concrete services to call cannot be predicted in advance, due to factors such as fluctuating network load or the availability of services. It is a more realistic assumption to endow software components with the ability to make decisions about the nature and scope of their interactions at runtime. Multiagent systems offer a complementary paradigm: building software applications from a number of self-interested, autonomous agents. This thesis presents an investigation into fusing the agency and service-oriented architecture paradigms in order to facilitate flexible workflow composition. Our approach offers an agent-based solution to service choreography and is founded on the concept of shared interaction protocols. By adopting an agent-based approach to service choreography, active autonomous agents can utilise the typically passive service-oriented architectures found in Internet and Grid systems. In contrast with statically defined, centralised service orchestrations, decentralised agents can perform service choreography at runtime, allowing them to operate in scenarios where it is not possible to define the pattern of interaction in advance. Application to real scenarios is a driving factor behind this research. By working closely with a number of active Grid projects, namely AstroGrid and the Large Synoptic Survey Telescope (LSST), a concrete set of requirements for scientific workflow has been derived, based on realistic science problems. This research has resulted in the MultiAgent Service Choreography (MASC) language for expressing scientific workflows, a methodology for system building, and a software framework which performs agent-based Web service choreography in order to enact distributed e-Science experiments. Evaluation of this thesis is conducted through a case study, applying the language, methodology and software framework to solve a motivating set of workflow scenarios.
45

On some queueing systems with server vacations, extended vacations, breakdowns, delayed repairs and stand-bys

Khalaf, Rehab F. January 2012
This research investigates a batch arrival queueing system with a Bernoulli scheduled vacation and random system breakdowns. It is assumed that the repair process does not start immediately after a breakdown; consequently, there may be a delay in starting repairs. After every service completion the server may go on an optional vacation, and when the original vacation is completed the server has the option to go on an extended vacation. It is assumed that the system is equipped with a stand-by server to serve customers during the vacation periods of the main server as well as during the repair process. The service times, vacation times, repair times, delay times and extended vacation times are assumed to follow different general distributions, while the breakdown times and the service times of the stand-by server follow an exponential distribution. By introducing a supplementary variable we are able to obtain steady-state results in an explicit closed form in terms of probability generating functions. Some important performance measures, including the average queue length, the average number of customers in the system, the mean response time and the traffic intensity, are presented. The professional MathCad 2001 software has been used to illustrate the numerical results in this study.
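For orientation, the sketch below computes the corresponding measures for a much simpler queue: a plain M/G/1 system without batches, vacations, breakdowns or a stand-by server, using the standard Pollaczek-Khinchine formula. It is only meant to show the kind of quantities the thesis derives in closed form for the richer model.

def mg1_measures(arrival_rate, mean_service, second_moment_service):
    # Basic M/G/1 measures via the Pollaczek-Khinchine formula. Much simpler
    # than the thesis model; shown only to illustrate the measures involved.
    rho = arrival_rate * mean_service                                 # traffic intensity
    assert rho < 1, "queue is unstable"
    lq = arrival_rate ** 2 * second_moment_service / (2 * (1 - rho))  # mean queue length
    l = lq + rho                                                      # mean number in system
    w = l / arrival_rate                                              # mean response time (Little's law)
    return {"traffic intensity": rho, "mean queue length": lq,
            "mean number in system": l, "mean response time": w}

# Exponential service at rate 2 (E[S] = 0.5, E[S^2] = 0.5), Poisson arrivals at rate 1.
print(mg1_measures(arrival_rate=1.0, mean_service=0.5, second_moment_service=0.5))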
46

A conceptual system design and managerial complexity competency model

Amaechi, Austin Oguejiofor January 2013
Complex adaptive systems are usually difficult to design and control. There are several particular methods for coping with complexity, but there is no general approach to building complex adaptive systems. The challenges of designing complex adaptive systems in a highly dynamic world drive the need for anticipatory capacity within engineering organizations, with the goal of enabling the design of systems that can cope with an unpredictable environment. This thesis explores the question of enhancing anticipatory capacity through the study of a complex adaptive system design methodology and complexity management competencies. A general introduction to challenges and issues in complex adaptive systems design is given, since a good understanding of the industrial context is considered necessary in order to avoid oversimplifying the problem, neglecting important factors and being unaware of important influences and relationships. In addition, a general introduction to complex thinking is given, since designing complex adaptive systems requires non-classical thought, and practical notions of complexity theory and design are put forward. Building on these, the research proposes a Complex Systems Life-Cycle Understanding and Design (CXLUD) methodology to aid system architects and engineers in the design and control of complex adaptive systems. Starting from a creative anticipation construct (a loosening mechanism that allows more options to be considered), the methodology proposes a conceptual framework and a series of stages to follow in order to find proper mechanisms that will promote elements towards desired solutions by actively interacting among themselves. To illustrate the methodology, a case study on the development of a systems architecture for a financial systemic risk infrastructure is presented. The final part of this thesis develops a conceptual model for analysing managerial complexity competencies from a qualitative phenomenological study perspective. The model developed in this research is called the Understanding-Perception-Action (UPA) managerial complexity competency model. The results of this competency model can be used to help ease project managers' transition into complex adaptive projects, as well as serve as a foundation for launching qualitative and quantitative research into this area of project complexity management.
47

Abstraction discovery and refinement for model checking by symbolic trajectory evaluation

Adams, Sara Elisabeth January 2014
This dissertation documents two contributions to automating the formal verification of hardware – particularly memory-intensive circuits – by Symbolic Trajectory Evaluation (STE), a model checking technique based on symbolic simulation over abstract sets of states. The contributions focus on improvements to the use of BDD-based STE, which uses binary decision diagrams internally. We introduce a solution to one of the major hurdles in using STE: finding suitable abstractions. Our work has produced the first known algorithm that addresses this problem by automatically discovering good, non-trivial abstractions. These abstractions are computed from the specification, and essentially encode partial input combinations sufficient for determining the specification's output value. They can then be used to verify whether the hardware model meets its specification using a technique based on, and significantly extending, previous work by Melham and Jones [2]. Moreover, we prove that our algorithm delivers correct results by construction. We demonstrate that the abstractions produced by our algorithm can greatly reduce verification costs with three example hardware designs, typical of the kind of problems faced by the semiconductor design industry. We further propose a refinement method for abstraction schemes when over-abstraction occurs, i.e., when the abstraction hides too much information of the original design to determine whether it meets its specification. The refinement algorithm we present is based on previous work by Chockler et al. [3], which selects refinement candidates by approximating which abstracted input is likely the biggest cause of the abstraction being unsuitable. We extend this work substantially, concentrating on three aspects. First, we suggest how the approach can also work for much more general abstraction schemes. This enables refining any abstraction allowed in STE, rather than just a subset. Second, Chockler et al. describe how to refine an abstraction once a refinement candidate has been identified; we present three additional variants of refining the abstraction. Third, the refinement at its core depends on evaluating circuit logic gates. The previous work offered solutions for NOT- and AND-gates; we propose a general approach to evaluating arbitrary logic gates, which improves the selection process of refinement candidates. We show the effectiveness of our work by automatically refining an abstraction for a content-addressable memory that exhibits over-abstraction, and by evaluating some common logic gates. These two contributions can be used independently to help automate hardware verification by STE, but they also complement each other. To show this, we combine both algorithms to create a fully automatic abstraction discovery and refinement loop. The only inputs required are the hardware design and the specification which the design should meet. While only small circuits could be verified completely automatically, this clearly shows that our two contributions allow the construction of a verification framework that does not require any user interaction.
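STE's abstraction rests on simulating the circuit over a domain that adds an unknown value X to 0 and 1, and the refinement step described above requires evaluating logic gates over such abstract values. The sketch below shows ternary NOT and AND gates plus a naive case-splitting evaluator for arbitrary gates; it is a simplified illustration of the underlying idea, not the algorithm proposed in the dissertation.

X = "X"  # the unknown value in the abstract domain {0, 1, X}

def not_gate(a):
    # Ternary NOT: unknown inputs stay unknown.
    return X if a == X else int(not a)

def and_gate(a, b):
    # Ternary AND: a single 0 input forces the output to 0 even if the other input is unknown.
    if a == 0 or b == 0:
        return 0
    if a == 1 and b == 1:
        return 1
    return X

def gate(op, inputs):
    # Evaluate an arbitrary boolean gate over ternary inputs by case-splitting
    # each X; the output is X unless all completions agree.
    if X not in inputs:
        return int(op(*inputs))
    i = inputs.index(X)
    lo = gate(op, inputs[:i] + [0] + inputs[i + 1:])
    hi = gate(op, inputs[:i] + [1] + inputs[i + 1:])
    return lo if lo == hi else X

print(and_gate(0, X))                                   # 0: determined despite the unknown input
print(gate(lambda a, b, c: (a and b) or c, [X, 0, X]))  # X: the output depends on the unknown c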
48

The choice of hybrid architectures, a realistic strategy to reach the Exascale

Loiseau, Julien 14 September 2018
The race to Exascale has begun, and countries around the world are competing to deliver an exaflopic supercomputer by 2020-2021. These supercomputers will be used for military purposes and to demonstrate the power of a nation, but also for research on climate, health, the automotive industry, physics, astrophysics and many other areas of application. These supercomputers of tomorrow must respect an energy envelope of 1 MW for reasons that are both economic and environmental. In order to build such a machine, conventional architectures must evolve into hybrid machines equipped with accelerators such as GPUs, Xeon Phi, FPGAs, etc. We show that current benchmarks do not seem sufficient to target these applications, which exhibit irregular behavior. This study sets up metrics targeting the limiting aspects of computing architectures: computation and communication under irregular behavior. The problem chosen to stress the computation wall is the academic Langford combinatorial problem, and to target the communication wall we propose our own implementation of the Graph500 benchmark. These two metrics clearly highlight the advantage of using accelerators, such as GPUs, in these specific circumstances that are limiting for HPC. To validate our thesis, we study a real problem that involves computation, communication and extreme irregularity at the same time. By performing physics and astrophysics simulations, we show once again the advantage of the hybrid architecture and its scalability.
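For reference, the Langford problem used to stress the computation wall asks for an arrangement of two copies of each number 1..n in which the two copies of k are separated by exactly k other positions. The tiny sequential backtracking sketch below only illustrates the problem; the thesis is concerned with massively parallel, accelerator-based implementations at much larger n.

def langford(n):
    # Return one Langford pairing L(2, n), or None if none exists.
    # Solutions exist only when n % 4 is 0 or 3. Tiny sequential sketch;
    # the thesis uses this problem to stress massively parallel architectures.
    seq = [0] * (2 * n)

    def place(k):
        if k == 0:
            return True
        for i in range(2 * n - k - 1):
            if seq[i] == 0 and seq[i + k + 1] == 0:   # both slots free, k apart
                seq[i] = seq[i + k + 1] = k
                if place(k - 1):
                    return True
                seq[i] = seq[i + k + 1] = 0           # undo and try the next slot
        return False

    return seq if place(n) else None

print(langford(4))  # [4, 1, 3, 1, 2, 4, 3, 2]
print(langford(5))  # None: no pairing exists when n % 4 is 1 or 2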
49

Resource-oriented architecture based scientific workflow modelling

Duan, Kewei January 2016
This thesis studies the feasibility and methodology of applying state-of-the-art computer technology to scientific workflow modelling within a collaborative environment. The collaborative environment also implies that the people involved include non-computer scientists and engineers from other disciplines. The objective of this research is to provide a systematic, web-based methodology that lowers the barriers created by the heterogeneous features of multiple institutions, multiple platforms and geographically distributed resources which such a collaborative scientific workflow environment implies.
50

A type-safe apparatus executing higher order functions in conjunction with hardware error tolerance

Kimmitt, Jonathan R. R. January 2015
The increasing commoditization of computers in modern society has exceeded the pace of associated developments in reliability. Although theoretical computer science has advanced greatly in the last thirty years, many of the best techniques have yet to find their way into embedded computers, and their failure has great potential to disrupt society. This dissertation presents some approaches to improve computer reliability using software and hardware techniques, and makes the following claims for novelty: innovative development of a toolchain and libraries to support extraction from dependent type checking in a theorem prover; conceptual design and deployment in reconfigurable hardware; an extension of static type-safety to the hardware description language and FPGA level; elimination of legacy C code from the target and toolchain; and a novel hardware error detection scheme, described and compared with conventional triple modular redundancy. The elimination of any user control of memory management promotes robustness against buffer overruns and consequently prevents vulnerability to common Trojan techniques. The methodology identifies type punning as a key weakness of commonly encountered embedded languages such as C, in particular the extreme difficulty of determining whether an array access is in bounds, or whether dynamic memory has been properly allocated and released. A method of eliminating dependence on type-unsafe libraries is presented, in conjunction with code that has optionally been proved correct according to user-defined criteria. An appropriately defined subset of OCaml is chosen with support for the Coq theorem prover in mind, and then evaluated with a custom backend that supports behavioural Verilog, as well as a fixed execution unit and associated control store. Results are presented for this alternative platform for reliable embedded systems development that may be used in future industrial flows. To provide assurance of correct operation, the proven software needs to be executed in an environment where errors are checked and corrected, in conjunction with appropriate exception processing in the event of an uncorrectable error. Therefore, the present author's previously published error detection scheme based on dual-rail logic and self-checking checkers is further developed and compared with traditional N-modular redundancy.
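As a point of comparison for the error detection scheme mentioned above, conventional triple modular redundancy runs three replicas of a computation and takes a majority vote, masking any single faulty replica. The sketch below shows this baseline at the software level with hypothetical replica functions; it is not the dissertation's dual-rail, self-checking design, which TMR is only compared against.

from collections import Counter

def tmr(replicas, *args):
    # Run three replicas of a computation and majority-vote their results.
    # Conventional TMR masks any single faulty replica; software-level sketch
    # only, whereas TMR is usually implemented in hardware.
    results = [replica(*args) for replica in replicas]
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: more than one replica failed")
    return value

def adder(a, b):
    return a + b

def faulty_adder(a, b):
    # Stands in for a replica hit by a radiation-induced bit flip.
    return (a + b) ^ 0b100

print(tmr([adder, adder, faulty_adder], 2, 3))  # 5: the single fault is masked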
