About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Normative approach to information systems modelling

Cordeiro, José A. M. January 2010 (has links)
Information and communication technologies (ICT), and computer systems in general, are increasingly integrated into daily life and are becoming an essential element of human and organisational existence in modern societies. The Information Systems (IS) field in particular is interested in enhancing processes and increasing the utility of information to organisations and their members by using these technologies. However, most IS development methodologies are technologically biased: they apply engineering approaches that originated in software engineering and related fields to the analysis, design and implementation of these systems, neglecting the human and organisational nature of information and IS. These methodologies fail to: (i) properly acknowledge the role of humans and their associated social, cultural, political and behavioural dimensions; (ii) understand the real interplay between humans and technology; and (iii) provide a sound and appropriate philosophical foundation. This thesis builds mainly upon the work and findings of three different but related theories that take the human as the central element of any IS, namely Organisational Semiotics (OS), the Theory of Organized Activity (TOA) and Enterprise Ontology (EO), and on the broader theories from which they derive: Semiotics, Activity Theory and the Language Action Perspective, respectively. In this research a deep analysis of these theories is undertaken to explore and compare their fundamental aspects and to derive their essential elements. The research proposes a new intellectual framework grounded in a new philosophical foundation, Human Relativism, which adopts the human as the central element and provides a new paradigm as a basis for any IS development methodology. A new approach, NOMIS (NOrmative Modelling of Information Systems), is introduced; it applies the new paradigm, is centred on human behaviour and human action in particular, and integrates the theoretical views of OS, TOA and EO. For modelling and representation purposes, a new modelling notation and a profile extension of the Unified Modelling Language (UML) were created to express and communicate the fundamental views of NOMIS. Finally, two case studies are used to (1) demonstrate the feasibility and applicability of NOMIS for modelling a business domain and (2) show the key concepts of NOMIS applied to the design of a computer application. Conclusions and future work complete this thesis.
12

An iterative approach to automation for system management

McLarnon, Barry Paul January 2013 (has links)
Automated system management solutions aim to reduce the pressure on the administrators of complex, large-scale, distributed systems by enabling the automation of many common management operations. However, this creates a level of abstraction which can act as a barrier between the administrator and the elements being controlled. This can contribute to a loss of trust in the management solution and may lead to a loss of control of the managed environment. This thesis proposes a novel approach to system management, called Iterative Automation, which allows the administrator to define how a management task is performed and enables the task to become automated in steps as more detail is provided about what causes the task to be performed and what parameters it should use. The solution also allows administrators to define relevant task output that can be analysed for fault states, enabling error recovery without administrator intervention. To compare this approach against existing management solutions, a novel evaluation methodology was created, based on a set of non-functional requirements derived from the relevant literature. This evaluation showed that while the Iterative Automation approach carries an initial overhead in human effort to enable management tasks to become automated, the level of effort decreases sharply, while the level of trustability and controllability that it offers is significantly higher than that of other automated approaches.
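The abstract describes tasks that gain automation in steps (how the task runs, then its trigger and parameters, then fault patterns in its output) but gives no concrete interface. The following is a minimal hypothetical sketch of that idea; the class and method names are our own illustration, not an API from the thesis.

```python
# Hypothetical sketch of an iteratively automated management task, assuming a
# simple command/trigger/parameters/fault-pattern interface; the thesis
# describes these concepts but does not specify this API.
import re
import subprocess

class IterativeTask:
    """A management task that gains automation as more detail is supplied."""

    def __init__(self, command):
        self.command = command        # step 1: how the task is performed
        self.params = {}              # step 2: what parameters it should use
        self.fault_patterns = []      # step 3: output patterns indicating faults

    def add_fault_pattern(self, pattern, recovery):
        # recovery is a callable invoked when the pattern matches task output
        self.fault_patterns.append((re.compile(pattern), recovery))

    def run(self):
        cmd = self.command.format(**self.params)
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        # Analyse the output for fault states; recover without intervention.
        for pattern, recovery in self.fault_patterns:
            if pattern.search(result.stdout + result.stderr):
                recovery()
        return result

# Each call below is one "iteration" toward fuller automation.
task = IterativeTask("systemctl restart {service}")
task.params["service"] = "nginx"
task.add_fault_pattern(r"failed", lambda: print("escalating to administrator"))
```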
13

Computation over metadata in knowledge transfer systems

Wilson, John N. January 2006 (has links)
No description available.
14

Modelling and verification of embedded systems based on Petri net oriented representations

Varea, Mauricio January 2003 (has links)
Driven by the demand for more functionality, the complexity involved in the design of embedded systems continues to increase. This has led to a progressive increase in the amount of control and data flow that current embedded systems need to deal with. This dissertation addresses the interaction between these two domains and investigates its influence on the design of embedded systems, in terms of overall design cost. The first part of this dissertation presents the formalisation of a new design representation, called Dual Flow Net (DFN), which provides a tight control and data flow interaction. This is achieved by means of two new concepts. Firstly, the structure of the new DFN model is formulated employing a tripartite graph, as opposed to previous approaches based on a bipartite graph. Such a structure allows the use of a single semantics to model the control flow, the data flow, and their interactions. Secondly, a marking scheme that captures the changes in the state of the system produced by the separate effects of control and data flow is described. The analysis of behavioural properties using such a marking is proposed, and illustrative examples are given. The second part of this dissertation is concerned with the verification of DFN models through formal methods. A new set of algorithms for the symbolic model checking of DFN models is proposed. Behavioural properties of embedded systems, such as reachability, safety and liveness, are verified using both Computation Tree Logic (CTL) and Linear Temporal Logic (LTL) formulae. A new estimation method is described that is capable of allocating resources to the verification process efficiently, thereby dealing with the state explosion problem. The algorithms and estimation method have been validated on examples of varying complexity, ranging from simple systems, used to illustrate the modelling and verification principles, up to complex arrangements that depict real-life embedded systems, including an Ethernet coprocessor. The final part of this dissertation investigates the applicability of DFN models to the co-synthesis of hardware/software systems, as a potential application of the new design representation. It is shown how the DFN model provides a flexible design framework for system-level trade-offs in the generated solution.
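The reachability, safety and liveness properties mentioned in the abstract correspond to standard temporal-logic templates; the formulas below are generic textbook patterns given for orientation, not formulas taken from the thesis.

```latex
% Generic temporal-logic property templates (illustrative, not from the thesis).
\mathrm{EF}\,\varphi
  % CTL reachability: some state satisfying \varphi is reachable
\mathrm{AG}\,\neg\mathit{bad}
  % CTL safety: the bad condition never occurs on any path
\mathrm{AG}\,(\mathit{req} \rightarrow \mathrm{AF}\,\mathit{grant})
  % CTL liveness: every request is eventually granted
\mathrm{G}\,(\mathit{req} \rightarrow \mathrm{F}\,\mathit{grant})
  % the corresponding LTL liveness formula
```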
15

A design framework for pervasive computing systems

Kostakos, Vassilis January 2004 (has links)
No description available.
16

Toward a new generation of dynamically reconfigurable avionic test and simulation systems

Afonso, George 02 July 2013 (has links)
The aim of this thesis is to propose new solutions in the field of avionic test and simulation systems, at several levels. First, we propose a dynamic execution model that unifies the test and simulation activities, meets the imposed constraints, brings new capabilities and thereby accelerates the design cycle of future embedded equipment. Next, a hardware support based on a heterogeneous CPU-FPGA architecture is defined to address this problem and to satisfy the constraints of the application domain, such as real-time operation and heterogeneous dynamic reconfiguration. This hardware support is complemented by a development methodology that gives better support to the industrial legacy code. Finally, a unified soft real-time environment for avionic test and simulation is proposed, based on standard technologies, in order to reduce the costs of mastering and maintaining a new environment. A case study highlights the dynamic reconfiguration capabilities and the possibilities of the developed environment.
17

A methodology to develop high performance applications on GPGPU architectures: application to simulation of electrical machines

Oliveira Rodrigues, Antonio Wendell de 26 January 2012 (has links)
Complex physical phenomena can be simulated numerically by mathematical techniques, usually based on the discretisation of the partial differential equations that govern them. Such simulations can lead to the solution of very large systems. Parallelising the numerical simulation codes, i.e., adapting them to parallel processing architectures, is therefore necessary to keep execution times reasonable. Parallelism has become standard in processor architectures, and graphics cards are now used for general-purpose computation, an approach known as General-Purpose GPU (GPGPU) computing, whose obvious advantage is an excellent performance/price ratio. This thesis addresses the design of such high-performance applications for the simulation of electrical machines. We provide a methodology based on Model Driven Engineering (MDE) that models an application and the architecture on which it executes in order to generate OpenCL code. Our goal is to help specialists in numerical simulation algorithms create efficient code that runs on GPGPU architectures. To this end, a model compilation chain that takes several aspects of the OpenCL programming model into account is provided. In addition, we provide model transformations that apply levels of optimisation based on the characteristics of the architecture. As an experimental validation, the methodology is applied to the creation of an application that solves a linear system arising from the Finite Element Method (FEM). In this case we show, among other things, the ability of the methodology to scale through a simple modification of the number of available GPU devices.
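The thesis generates OpenCL code from models; the sketch below is not that generated code, just a minimal hand-written example (using the pyopencl bindings, our own choice of host language) of the kind of data-parallel kernel involved in an FEM linear solve, here an axpy update as used inside iterative solvers such as conjugate gradient.

```python
# Minimal OpenCL example (illustrative; not the thesis's generated code).
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

# One data-parallel work-item per vector element: y <- y + alpha * x
src = """
__kernel void axpy(const float alpha,
                   __global const float *x,
                   __global float *y) {
    int i = get_global_id(0);
    y[i] += alpha * x[i];
}
"""
prg = cl.Program(ctx, src).build()

n = 1024
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
mf = cl.mem_flags
x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
y_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=y)

prg.axpy(queue, (n,), None, np.float32(2.0), x_buf, y_buf)
cl.enqueue_copy(queue, y, y_buf)  # read the result back to the host
```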
18

Supporting organisational semiotics with natural language processing techniques

Cosh, Kenneth John January 2003 (has links)
No description available.
19

An analogue approach for the processing of biomedical signals

Mangieri, Eduardo January 2012 (has links)
Constant device scaling has significantly boosted electronic systems design in the digital domain, enabling the incorporation of more functionality within a small silicon area while allowing high-speed computation. This trend has been exploited to develop high-performance miniaturised systems in a number of application areas such as communication, sensor networks, mainframe computers and biomedical information processing. Although successful, the associated cost comes in the form of high leakage power dissipation and reduced system reliability. With increasing customer demand for smarter and faster technologies, and with the advent of pervasive information processing, these issues may prove to be limiting factors for the application of traditional digital design techniques. Furthermore, as the limit of device scaling nears, performance enhancement under the conventional digital design methodology cannot be pushed any further unless innovations in new materials and new transistor designs are made. To this end, an alternative design methodology that may enable performance enhancement without depending on device scaling is much sought after today. Analogue design is one such alternative that has recently gained considerable interest. Although it is well understood that several roadblocks must still be overcome before analogue-based system design becomes a mainstream technique for information processing (e.g., the lack of automated design tools, noise performance, and the efficient implementation of passive components on silicon), it may offer a faster way of realising a system with very few components and may therefore have positive implications for system performance. The main aim of this thesis is to explore possible ways of processing information using analogue design techniques, in particular in the field of biomedical systems.
20

A design framework for identifying optimum services using choreography and model transformation

Alahmari, Saad January 2012 (has links)
Service Oriented Architecture (SOA) has become an effective approach for implementing loosely-coupled and flexible systems based on a set of services. However, despite the increasing popularity of the SOA approach, no comprehensive methodology is currently available to identify “optimum” services. Difficulties include the abstraction gap between the business process model and the service interface design, as well as the service quality trade-offs that affect the identification of “optimum” services. The selection of these “optimum” services implies that SOA implementation should be driven by the business model and should also consider the appropriate level of granularity. The objective of this thesis is to identify optimum service interface designs by bridging the abstraction gap and balancing the trade-offs between service quality attributes. This thesis proposes a framework that uses the choreography concept to bridge the abstraction gap between the business process model and the service interface design, together with service quality metrics to evaluate service quality attributes. The framework generates the service interface design automatically through a chain of model transformations from a business process model, via the choreography concept (a service choreography model). The framework also includes a service quality model that measures service granularity and the service quality attributes of complexity, cohesion and coupling. These measurements are used to evaluate candidate service interface designs and then select the optimum one. Throughout this thesis, a pragmatic approach is used to validate the transformation models by applying three application scenarios and evaluating consistency. The service quality model is evaluated empirically using the generated service interface designs. Despite the challenges that remain in identifying “optimum” services for service-oriented systems, this thesis demonstrates that they can be effectively identified using the new framework.
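The abstract names granularity, complexity, cohesion and coupling metrics without defining them. The following is a generic illustration of how such interface-level metrics are commonly computed (counting operations and shared message types); the formulas and the example services are our own assumptions, not the thesis's quality model.

```python
# Generic, illustrative service-interface metrics; NOT the metrics defined in
# the thesis, whose quality model is not given in the abstract.
from itertools import combinations

services = {
    # hypothetical service interfaces: operation -> message types it uses
    "OrderService": {"placeOrder": {"Order", "Customer"},
                     "cancelOrder": {"Order"}},
    "BillingService": {"invoice": {"Order", "Invoice"},
                       "refund": {"Invoice"}},
}

def granularity(ops):
    """Coarse proxy: number of operations exposed by the interface."""
    return len(ops)

def cohesion(ops):
    """Fraction of operation pairs sharing at least one message type."""
    pairs = list(combinations(ops.values(), 2))
    if not pairs:
        return 1.0
    shared = sum(1 for a, b in pairs if a & b)
    return shared / len(pairs)

def coupling(ops_a, ops_b):
    """Number of message types two services both depend on."""
    msgs_a = set().union(*ops_a.values())
    msgs_b = set().union(*ops_b.values())
    return len(msgs_a & msgs_b)

for name, ops in services.items():
    print(name, granularity(ops), round(cohesion(ops), 2))
print(coupling(services["OrderService"], services["BillingService"]))
```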
