161

QoS-CARE : a reliable system for preserving QoS contracts through dynamic reconfiguration / QoS-CARE : un système fiable pour préserver les contrats de qualité de service dans une reconfiguration dynamique

Tamura, Gabriel 28 May 2012 (has links)
The main challenge of this thesis is to reliably preserve quality of service (QoS) contracts in component-based software systems under changing conditions of system execution. In response to this challenge, the contribution is twofold. The first is a model of component-based software applications, QoS contracts and reconfiguration rules as typed attributed graphs, together with a definition of QoS-contract semantics as state machines in which transitions are performed as software reconfigurations. We thus use formal models at runtime to reliably reconfigure software applications so as to preserve their QoS contracts. More specifically, we show the feasibility of exploiting design patterns at runtime in reconfiguration loops to fulfill the expected QoS levels associated with specific context conditions. We realize this formal model through a component-based architecture and implementation that can be used as an additional layer of SCA middleware stacks to preserve the QoS contracts of executing applications. The second contribution is the characterization of adaptation properties to evaluate self-adaptive software systems in a standardized and comparable way. By their very nature, the adaptation mechanisms of self-adaptive software systems are essentially feedback loops as defined in control theory, so it is reasonable to evaluate them using the standard properties used to evaluate feedback loops, re-interpreted for the software domain. We define the reliability of our formal model realization in terms of a subset of the characterized adaptation properties, and we show that these properties are guaranteed in this realization.
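As a rough illustration of the contract-as-state-machine idea described above, the following Python sketch (not taken from QoS-CARE; the levels, thresholds and reconfiguration actions are hypothetical) models a QoS contract whose states are QoS levels and whose transitions trigger reconfiguration actions:

```python
# Minimal sketch, assuming a latency-based contract: states are QoS levels,
# transitions fire when an observation leaves the current level's range and
# apply a reconfiguration action. Not the thesis's formal graph-based model.
from dataclasses import dataclass
from typing import Callable, Dict, Tuple


@dataclass
class QoSContract:
    states: Dict[str, Tuple[float, float]]                       # level -> (min, max) latency in ms
    reconfigurations: Dict[Tuple[str, str], Callable[[], None]]  # (from, to) -> reconfiguration action
    current: str = "normal"

    def observe(self, latency_ms: float) -> str:
        lo, hi = self.states[self.current]
        if lo <= latency_ms <= hi:
            return self.current                                   # contract satisfied, no change
        for level, (low, high) in self.states.items():            # find a level covering the observation
            if low <= latency_ms <= high:
                action = self.reconfigurations.get((self.current, level))
                if action:
                    action()                                       # e.g. insert a cache component
                self.current = level
                return level
        raise RuntimeError("no QoS level covers the observation: contract violated")


contract = QoSContract(
    states={"normal": (0.0, 200.0), "degraded": (200.0, 1000.0)},
    reconfigurations={
        ("normal", "degraded"): lambda: print("reconfiguring: adding cache component"),
        ("degraded", "normal"): lambda: print("reconfiguring: removing cache component"),
    },
)
print(contract.observe(150.0))   # -> normal
print(contract.observe(450.0))   # triggers a reconfiguration, -> degraded
```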
162

Security for mobile grid systems

Alwada’n, Tariq Falah January 2012 (has links)
Grid computing technology uses inexpensive systems to gather and utilize computational capability. It enhances application services by arranging machines and distributed resources into a single huge computational entity. A Grid is a system that can organize resources that are not under the control of a centralized domain, utilize protocols and interfaces, and supply a high quality of service. The Grid should be able not only to enhance the system performance and job throughput of the participating applications, but also to increase the utilization of resources by applying effective resource management methods to its huge pool of resources. Grid mobility has appeared as a technology that facilitates meeting the requirements of Grid jobs as well as Grid users; the idea depends on migrating or relocating jobs, data and application software among Grid nodes. However, making use of mobility technology leads to data confidentiality problems within the Grid. Data confidentiality is the protection of data from intruders' attacks. It can be addressed by limiting mobility to trusted parts of the Grid, but this solution leads to the notion of Virtual Organizations (VOs). Mobility also increases the need for a tool to organize and enforce policies while mobility is applied. To date, not enough attention has been paid to policies that deal with data movements within the Grid: most existing Grid systems support only limited types of policies (e.g. CPU resources), and only a few designs consider enforcing data policies in their architecture. We therefore propose a policy-managed Grid environment that addresses these issues (user-submitted policies, data policies, and multiple VOs). In this research, a new policy management tool is introduced to address the mobility limitation and data confidentiality, especially in the case of mobile sharing and data movements within the Grid. We present a dynamic and heterogeneous policy management framework that gives a clear policy definition of the ability to move jobs, data and application software from node to node during job execution in the Grid environment. The framework supports a multi-organization environment with different domains, supports external Grid users' preferences, and enforces policies for data movements and the mobility feature across different domains. The results of this research have been evaluated using the Jade simulator, a software framework fully implemented in Java that allows agents to execute tasks defined according to the agent policy. The simulation results verify that the research aims of enhanced security and performance in Grid environments are achieved; they also show enhanced control over the distribution and usage of data and services, and provide practical evidence, in the form of scenario test-bed data, of the effectiveness of our architecture.
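To make the notion of a user-submitted data policy concrete, here is a minimal Python sketch (not part of the thesis framework; the classes, attributes and rules are hypothetical) of a check that a data movement must pass before a job or its data migrates to another node:

```python
# Minimal sketch, assuming a policy keyed on virtual organisations (VOs) and
# node trust: it only illustrates the kind of check a policy engine might make.
from dataclasses import dataclass
from typing import List


@dataclass
class DataPolicy:
    owner_vo: str              # virtual organisation that owns the data
    allowed_vos: List[str]     # other VOs the data may move to
    confidential: bool         # confidential data must stay on trusted nodes


@dataclass
class Node:
    name: str
    vo: str
    trusted: bool


def may_migrate(policy: DataPolicy, target: Node) -> bool:
    """Enforce the user-submitted data policy before any job/data movement."""
    if target.vo != policy.owner_vo and target.vo not in policy.allowed_vos:
        return False           # cross-VO movement not permitted by the policy
    if policy.confidential and not target.trusted:
        return False           # confidentiality restricts movement to trusted nodes
    return True


policy = DataPolicy(owner_vo="vo-physics", allowed_vos=["vo-chem"], confidential=True)
print(may_migrate(policy, Node("n1", "vo-chem", trusted=True)))   # True
print(may_migrate(policy, Node("n2", "vo-bio", trusted=True)))    # False
```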
163

A meta-modelling language definition for specific domain

Liang, Zhihong January 2009 (has links)
Model Driven software development has been regarded as the next software construction technology after object-oriented development methods, with the potential to bring new breakthroughs in software development research. As research has deepened, a growing number of Model Driven software development methods have been proposed, and models are now widely used in all aspects of software development. One key element determining progress in Model Driven software development research is how to better express and describe the models required for the various software components. From a study of current Model Driven development technologies and methods, this thesis proposes Domain-Specific Modelling as a Model Driven method that better realises the potential of Model Driven software development. For domain-specific modelling methods to be applied successfully to actual software development projects, they need the support of a flexible and easily extensible meta-modelling language; there is a particular requirement for modelling languages based on domain-specific modelling methods, as most general modelling languages are not suitable for this kind of meta-modelling. The thesis focuses on the implementation of domain-specific modelling methods. The "domain" is stressed as the keystone of software design and development, and this is what most differentiates the approach from general software development processes and methods. For the design of the meta-modelling language, a meta-modelling language based on XML is defined, including its abstract syntax, concrete syntax and semantics. It supports the description and construction of the domain meta-model and the domain application model, and can effectively realise visual descriptions, domain object descriptions, relationship descriptions and rules of the domain model. In the area of supporting tools, a meta-meta model is given, which provides a group of general basic component meta-model elements together with the relationships between elements for constructing the domain meta-model; it supports multi-view, multi-level description of the domain model. Developers or domain experts can complete the design and construction of the domain-specific meta-model and the domain application model in the integrated modelling environment. Through further study of the key technologies of meta-modelling languages based on Model Driven development, the thesis lays the foundation necessary for research into descriptive languages.
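As a toy illustration of an XML-based meta-model constraining a domain application model (this is not the thesis's actual language; the element and attribute names are invented), the following Python sketch checks that a model only uses element types declared in its meta-model:

```python
# Minimal sketch, assuming a meta-model that simply enumerates allowed element
# types; real meta-modelling languages also define relationships and rules.
import xml.etree.ElementTree as ET

META_MODEL = """
<metamodel domain="traffic">
  <element name="Junction"/>
  <element name="Road"/>
  <relationship name="connects" from="Road" to="Junction"/>
</metamodel>
"""

APPLICATION_MODEL = """
<model domain="traffic">
  <Junction id="j1"/>
  <Road id="r1" connects="j1"/>
</model>
"""


def conforms(meta_xml: str, model_xml: str) -> bool:
    meta = ET.fromstring(meta_xml)
    model = ET.fromstring(model_xml)
    allowed = {e.get("name") for e in meta.findall("element")}
    # every element in the application model must be declared in the meta-model
    return all(child.tag in allowed for child in model)


print(conforms(META_MODEL, APPLICATION_MODEL))   # True
```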
164

An approach to implementing cloud service oriented legacy application evolution

Zheng, Shang January 2013 (has links)
An emerging IT delivery model, Cloud Computing, can significantly reduce IT costs and complexities while improving workload optimisation and service delivery. More and more organisations are planning to migrate their existing systems into this internet-driven computing environment. The investigation proposed here has two main aims. The first is to establish a general framework and method to assist the evolution of legacy systems into and within the Cloud environment. The second is to evaluate the proposed approach and demonstrate that it can be more effective than developing Cloud services from scratch. The underlying research procedure of this thesis consists of observation, proposition, test and conclusion. The thesis contributes a novel evolution approach for Cloud computing: a technical solution framework built on a three-dimensional software evolution paradigm that covers the relationships between software models, software functions and software qualities in different Cloud paradigms. Finally, the evolved service is run in the Cloud environment. The framework is implemented in three phases: 1) legacy system analysis and extraction, which provides an analysis approach for assessing the legacy system with respect to the Cloud environment and adopts program slicing, with an improved algorithm, and software clustering to extract legacy components; 2) Cloud-oriented service migration, covering the evolution of software into and within the Cloud, where evolving software 'INTO' the Cloud can be viewed mainly as changing software qualities on software models, evolving software 'WITHIN' the Cloud can be viewed mainly as changing software functions on software models, and the techniques of program and model transformation and software architecture engineering are applied; 3) Cloud service integration, which integrates and deploys the service in the Cloud environment. The proposed approach is shown to be flexible and practical through the selected case study. Conclusions based on the analysis, and directions for future research, are discussed at the end of the thesis.
165

Development of a scheduler with randomized choices

Τόλλος, Αθανάσιος 10 March 2014 (has links)
The modern world of networks and the internet demands very high interconnection speeds across all kinds of networks: from home networks and local area networks (LANs), to campus networks, metropolitan area networks (MANs), wide area networks (WANs) and the core networks of the internet. All of these networks make extensive use of switches and routers to carry network traffic from its origin to its destination across a multitude of other networks. The core of switches and routers is the scheduler: an algorithm, implemented in the hardware of each device, that decides how information is forwarded from its input to its output, after the output port has been determined by another mechanism. The importance of the scheduler is evident from the range of problems it has to solve, among them contention between inputs for the same output, input-output matching, minimising the delay of traffic passing through, operational stability, maximising throughput, and fairness in serving inputs and outputs. This thesis presents the ROLM (Randomized On-Line Matching) family of scheduling algorithms, which implements randomness in an efficient and effective way. Its performance shows in the low packet-forwarding delay, and hence high throughput, and in the fairness characteristics it offers compared with existing competing implementations, which use deterministic decision methods instead of randomness. These results are due to the basic algorithm of the ROLM family, Ranking, which computes a maximum input-output matching. The algorithms choose inputs at random for forwarding to the outputs they request, a choice that can lead to high-speed schedulers, with the attainable speed set by the implementation technology and the speed of the network links. The Ranking algorithm is implemented in software and in hardware on ATMEL's FPSLIC platform, which contains an 8-bit processor, the AVR, and a field-programmable gate array (FPGA) on the same board, fabricated in the same technology, so that measurements of the two implementations are comparable. The program developed, for both the software and the hardware implementation, takes the size of the switch as a parameter, and characteristics such as speed, decision time, area and the number of I/O pins are measured and compared for switches of size four inputs by four outputs (4x4), 8x8, 16x16 and 32x32.
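The following Python sketch gives a simplified flavour of randomized input-output matching for one scheduling slot; it is not the thesis's Ranking algorithm, and the greedy strategy and data layout are assumptions made for illustration:

```python
# Minimal sketch, assuming requests[i] holds the set of outputs with queued
# cells at input i: a random service order plus a random choice among free
# requested outputs yields a conflict-free matching for one time slot.
import random
from typing import Dict, List, Set


def randomized_match(requests: List[Set[int]], n_outputs: int) -> Dict[int, int]:
    """Return a conflict-free input->output matching for one time slot."""
    matching: Dict[int, int] = {}
    free_outputs = set(range(n_outputs))
    inputs = list(range(len(requests)))
    random.shuffle(inputs)                      # random service order avoids static bias
    for i in inputs:
        candidates = list(requests[i] & free_outputs)
        if candidates:
            out = random.choice(candidates)     # random choice among free requested outputs
            matching[i] = out
            free_outputs.discard(out)
    return matching


# 4x4 switch: each input requests a subset of outputs
reqs = [{0, 1}, {1}, {2, 3}, {0, 3}]
print(randomized_match(reqs, n_outputs=4))
```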
166

An animated pedagogical agent for assisting novice programmers within a desktop computer environment

Case, Desmond Robert January 2012 (has links)
Learning to program for the first time can be a daunting process, fraught with difficulty and setbacks. The novice learner is faced with learning two interdependent skills at the same time: how a program needs to be constructed to solve a problem, and how the structures of a program work towards solving a problem. In addition, the learner has to develop practical skills such as how to design a solution, how to use the programming development environment, how to recognise errors, how to diagnose their cause and how to correct them successfully. The nature of learning how to program a computer can frustrate many learners and cause some to disengage before they have a chance to progress. Numerous authorities have observed that novice programmers make the same mistakes and encounter the same problems when learning their first programming language; the errors usually stem from a fixed set of misconceptions that are easily corrected by experience and appropriate guidance. This thesis demonstrates how a virtual animated pedagogical agent, called MRCHIPS, can extend the Beliefs-Desires-Intentions model of agency to provide mentoring and coaching support to novice programmers learning their first programming language, Python. The Cognitive Apprenticeship pedagogy provides the theoretical underpinning of the agent's mentoring strategy, and Case-Based Reasoning supports MRCHIPS in reasoning, coaching and interacting with the learner. The results of a small controlled study indicate that novice learners assisted by MRCHIPS are more productive than those working without assistance and perform better on problem-solving exercises; they also exhibit a higher degree of engagement and better learning of the language syntax.
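As a hedged illustration of the case-based reasoning idea (this is not MRCHIPS itself; the case base and similarity measure are placeholders), a coaching hint can be retrieved for the stored case most similar to a learner's error message:

```python
# Minimal sketch, assuming a tiny case base of Python error messages and hints
# and simple string similarity for retrieval.
from difflib import SequenceMatcher

CASE_BASE = {
    "NameError: name 'x' is not defined":
        "Check that the variable is spelled consistently and assigned before use.",
    "IndentationError: expected an indented block":
        "The body of an if/for/def must be indented under its header line.",
    "TypeError: can only concatenate str (not \"int\") to str":
        "Convert the number with str() before joining it to a string.",
}


def coach(error_message: str) -> str:
    """Retrieve the hint attached to the most similar stored case."""
    best = max(CASE_BASE, key=lambda c: SequenceMatcher(None, c, error_message).ratio())
    return CASE_BASE[best]


print(coach("NameError: name 'total' is not defined"))
```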
167

Requirement validation with enactable descriptions of use cases

Kanyaru, J. M. January 2006 (has links)
The validation of stakeholder requirements for a software system is a pivotal activity for any non-trivial software development project. Often, differences in knowledge regarding development issues, and knowledge regarding the problem domain, impede the elaboration of requirements amongst developers and stakeholders. A description technique that provides a user perspective of the system behaviour is likely to enhance shared understanding between the developers and stakeholders. The Unified Modelling Language (UML) use case is such a notation. Use cases describe the behaviour of a system (using natural language) in terms of interactions between the external users and the system. Since the standardisation of the UML by the Object Management Group in 1997, much research has been devoted to use cases. Some researchers have focussed on the provision of writing guidelines for use case specifications whereas others have focussed on the application of formal techniques. This thesis investigates the adequacy of the use case description for the specification and validation of software behaviour. In particular, the thesis argues that whereas the user-system interaction scheme underpins the essence of the use case notation, the UML specification of the use case does not provide a mechanism by which use cases can describe dependencies amongst constituent interaction steps. Clarifying these issues is crucial for validating the adequacy of the specification against stakeholder expectations. This thesis proposes a state-based approach (the Educator approach) to use case specification where constituent events are augmented with pre- and post-states to express both intra-use case and inter-use case dependencies. Use case events are enacted to visualise implied behaviour, thereby enhancing shared understanding among users and developers. Moreover, enaction provides an early "feel" of the behaviour that would result from the implementation of the specification. The Educator approach and the enaction of descriptions are supported by a prototype environment, the EducatorTool, developed to demonstrate the efficacy and novelty of the approach. To validate the work presented in this thesis, an industrial study involving the specification of real-time control software is reported. The study involves the analysis of use case specifications of the subsystems prior to the application of the proposed approach, and the analysis of the specification where the approach and tool support are applied. In this way, it is possible to determine the efficacy of the Educator approach within an industrial setting.
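A minimal sketch of the pre/post-state idea, assuming hypothetical state and event names (this is not the EducatorTool), might enact a use case by checking each step's pre-state before firing it:

```python
# Minimal sketch: use case events augmented with pre- and post-states, enacted
# sequentially; a mismatch between the system state and a step's pre-state
# exposes a dependency problem in the specification.
from dataclasses import dataclass
from typing import List


@dataclass
class Step:
    event: str
    pre: str      # state required before the event can occur
    post: str     # state reached after the event occurs


def enact(steps: List[Step], initial: str) -> None:
    state = initial
    for step in steps:
        if step.pre != state:
            raise RuntimeError(
                f"'{step.event}' expects state '{step.pre}', but system is in '{state}'")
        print(f"{step.event}: {step.pre} -> {step.post}")
        state = step.post


withdraw_cash = [
    Step("insert card", "idle", "card inserted"),
    Step("enter PIN", "card inserted", "authenticated"),
    Step("dispense cash", "authenticated", "idle"),
]
enact(withdraw_cash, initial="idle")
```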
168

Combining data driven programming with component based software development : with applications in geovisualisation and dynamic data driven application systems

Jones, Anthony Andrew January 2008 (has links)
Software development methodologies are becoming increasingly abstract, progressing from low-level assembly and implementation languages such as C and Ada to component-based approaches that can be used to assemble applications using technologies such as JavaBeans and the .NET framework. Meanwhile, model-driven approaches emphasise the role of higher-level models and notations, and embody a process of automatically deriving lower-level representations and concrete software implementations.
169

Traçabilité pour la mise au point de modèles et la correction de transformations / Traceability to adjust models and correct transformations

Aranega, Vincent 28 November 2011 (has links)
The increasing complexity of systems, hardware architectures and the way they are programmed has led to new paradigms that simplify development. In this context, model-driven engineering (MDE) proposes to work with abstract representations of a system, helping designers focus on the features of their systems rather than on implementation details. Model transformations, and especially MDE compilation chains, relieve designers by generating application code automatically from high-level design models. However, the generated application does not always have the expected behavior or performance. In this thesis, we propose two approaches for the correction and optimization of models, both based on the traceability of model transformations. This work rests on the strong assumption that the transformation chain is trustworthy; to build such a chain, it is important to test the transformations that compose it. To best assist chain developers during the testing phases, we provide a way to locate errors in a transformation and in a transformation chain, and we propose an assistant for applying the mutation analysis technique, which is largely manual. This work was implemented in Gaspard, a co-design environment for embedded systems based on MDE compilation chains.
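To illustrate how transformation traces can help localise errors (a sketch only, not Gaspard's trace model; the rule and element names are invented), trace links from rules to generated elements can be queried for the rules that produced a faulty element:

```python
# Minimal sketch, assuming each trace link records the transformation rule that
# fired, the source elements it consumed and the target elements it produced.
from dataclasses import dataclass
from typing import List


@dataclass
class TraceLink:
    rule: str            # transformation rule that fired
    sources: List[str]   # source model elements consumed
    targets: List[str]   # target model elements produced


def suspect_rules(trace: List[TraceLink], faulty_targets: List[str]) -> List[str]:
    """Return the rules whose outputs include a faulty generated element."""
    return sorted({link.rule for link in trace
                   if any(t in link.targets for t in faulty_targets)})


trace = [
    TraceLink("Task2Thread", ["task_A"], ["thread_A"]),
    TraceLink("Port2Queue", ["port_p"], ["queue_p"]),
]
print(suspect_rules(trace, faulty_targets=["queue_p"]))   # ['Port2Queue']
```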
170

Enabling collaborative modelling for a multi-site model-driven software development approach for electronic control units

Grimm, Frank January 2012 (has links)
An important aspect of supporting distributed work is enabling users at different sites, even in different countries, to work collaboratively on the same artefacts. When those artefacts are the design models of software systems, the models need to be accessible to more than one modeller at a time, allowing the modellers to work independently of each other in what can be called a collaborative modelling process supporting parallel evolution. In addition, because such design is a largely creative process, modellers are free to create layouts that better depict their understanding of the model elements presented in a diagram; that is, the layout of the model carries meaning that exceeds the simple structural or topological connections. However, tools for merging such models tend to do so from a purely structural perspective, thus losing an important aspect of the meaning the modeller intended to convey. This thesis presents a novel approach to model merging that preserves such layout meaning when merging. It first presents evidence from an industrial study which demonstrates how modellers use layout to convey meaning; an important finding of the study is that diagram layout conveys domain-specific meaning and is important for modellers. The thesis therefore demonstrates the importance of diagram layout in model-based software engineering. It then introduces a merging approach that preserves domain-specific meaning in diagrams of models, and finally describes a prototype tool and core aspects of its implementation.
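As a simplified illustration of layout-preserving merging (not the thesis prototype; the three-way rule shown here is an assumption made for the example), positions added or changed by either modeller can be carried into the merged diagram alongside the structural union:

```python
# Minimal sketch: merge two diagram layouts against a common base, keeping the
# position chosen by whichever modeller added or moved an element. If both
# sides move the same element, the second side wins in this naive version.
from typing import Dict, Tuple

Layout = Dict[str, Tuple[int, int]]   # element id -> (x, y) position on the diagram


def merge_layouts(base: Layout, left: Layout, right: Layout) -> Layout:
    merged = dict(base)
    for name, pos in list(left.items()) + list(right.items()):
        if name not in base or pos != base[name]:
            merged[name] = pos         # keep the position set by the modeller who changed it
    return merged


base = {"Order": (10, 10), "Customer": (200, 10)}
left = {"Order": (10, 10), "Customer": (200, 10), "Invoice": (10, 120)}   # added Invoice below Order
right = {"Order": (10, 10), "Customer": (400, 10)}                        # moved Customer aside
print(merge_layouts(base, left, right))
```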
