611

Remote Debugging and Reflection in Resource-Constrained Devices

Papoulias, Nikolaos 19 December 2013 (has links) (PDF)
Building software for devices that cannot host development tools locally can be difficult. Such devices either have too little computing power to run an IDE (e.g., smartphones), lack the input/output interfaces (screen, keyboard, mouse) needed for programming (e.g., mobile robots), or are simply inaccessible for local development (e.g., cloud servers). In these situations, developers need a suitable infrastructure to develop and debug remote applications. Remote debugging solutions can be awkward to use because of their distributed nature: empirical studies show that, on average, 10.5 minutes per hour of coding (more than five 40-hour work weeks per year) are spent re-deploying applications to fix bugs or improve functionality [ZeroTurnAround 2011]. Moreover, current solutions lack facilities that would otherwise be available in a local setting, because these are difficult to reproduce remotely (e.g., object-centric debugging [Ressia 2012b]). This state of affairs reduces the amount of experimentation during a remote debugging session compared with a local one. In this dissertation, to overcome these problems, we first identify four desirable properties that an ideal remote debugging solution should exhibit: interactiveness, instrumentation, distribution, and security. Interactiveness is the ability of a remote debugging solution to update all parts of an application incrementally, without losing the execution context (that is, without stopping the application). Instrumentation is the ability to alter the semantics of a running process in order to assist debugging. Distribution is the ability of a debugging solution to adapt its own framework while debugging a remote target. Finally, security refers to the availability of prerequisites for authentication and access restriction. Given these properties, we propose Mercury, a remote debugging model and architecture for reflective object-oriented languages. Mercury supports (1) interactiveness through a mirror-based remote meta-level with a causal link to its target, (2) instrumentation through reflective intercession based on the reification of the underlying execution environment, (3) distribution through an adaptable middleware, and (4) security through the decomposition and authentication of access to reflective facilities. We validate our proposal with a prototype in the Pharo programming language, using an experimental setup comprising multiple, diverse constrained devices. We illustrate remote debugging techniques supported by Mercury's properties, such as remote agile debugging and remote object instrumentation, and show how they solve in practice the problems we identified.
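
To make the mirror-based approach concrete, here is a minimal sketch in Python of a remote mirror that forwards reflective operations to its target over a request/reply channel, so the debugger can inspect and patch the target without stopping it. All names here (Transport, RemoteMirror, instance_variables) are illustrative assumptions, not Mercury's actual Pharo API.

    # Local proxy forwards reflective operations to a remote target.
    import json


    class Transport:
        """Stand-in for an adaptable middleware: a request/reply channel."""

        def __init__(self, handler):
            self.handler = handler  # in a real setting, a network endpoint

        def send(self, request):
            # Round-trip through JSON to simulate serialization on the wire.
            return self.handler(json.loads(json.dumps(request)))


    class RemoteMirror:
        """Local mirror of a remote object; every reflective call crosses the wire."""

        def __init__(self, transport, object_id):
            self._transport = transport
            self._id = object_id

        def instance_variables(self):
            return self._transport.send({"op": "ivars", "id": self._id})

        def set_variable(self, name, value):
            # Incremental update without halting the target: the causal link.
            return self._transport.send(
                {"op": "set", "id": self._id, "name": name, "value": value})


    # Remote side: the reified execution environment answering mirror requests.
    objects = {1: {"x": 10, "y": 20}}

    def handle(req):
        target = objects[req["id"]]
        if req["op"] == "ivars":
            return sorted(target)
        if req["op"] == "set":
            target[req["name"]] = req["value"]
            return "ok"

    mirror = RemoteMirror(Transport(handle), object_id=1)
    print(mirror.instance_variables())   # ['x', 'y']
    print(mirror.set_variable("x", 99))  # 'ok' -- object patched while running
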
612

A VPA-based Aspect Language

Nguyen, Dong Ha 21 October 2011 (has links)
This thesis focuses on the development of an advanced history-based aspect language and on approaches to certain related issues, ranging from applications to analysis methods. The aspect language, namely the VPA-based Aspect Language, is defined upon visibly pushdown automata (VPAs) [21]. This language is essentially an extension of an existing framework [47] of regular aspect languages. It features VPA-based pointcuts and provides, in particular, constructors for the declarative definition of pointcuts based on regular and non-regular structures. We have also extended and developed a technique for automatically detecting potential interactions among VPA-based aspects. Despite the several advantages of the class of visibly pushdown automata, no practical support for them has been available. Therefore, we have implemented a library called VPAlib that provides the essential data structures and operations for VPAs. This library is essential to enable the construction and analysis of VPA-based aspects; for instance, we have successfully performed analyses for detecting interactions among aspects using it. In order to motivate the use of VPA-based aspects, we have studied two basic kinds of distributed applications, one representing typical systems with nested login sessions, and the other representing a grid computing system over a peer-to-peer network. We have shown how VPA-based aspects can be useful for realizing certain functionalities of these typical distributed applications. Thanks to their highly expressive pointcuts, another important application of VPA-based aspects is to define evolution on component-based systems, especially those with explicit component protocols. The use of aspects over component protocols, however, may break the coherence between the components of the system. We have therefore developed proof methods to establish the preservation of fundamental correctness properties, such as the compatibility and substitutability relations between software components, after the application of VPA-based aspects. Finally, we have considered the use of model checking techniques to verify systems that are modified by aspects. The goal of the verification is to check whether an aspect violates the global properties of a base system or the properties of other aspects. We have chosen an approach in which we create an abstract model from the VPA model and then run a model checker capable of checking the abstract model against the properties. We formally define the abstraction process and demonstrate our model checking approach via examples.
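
For readers unfamiliar with the formalism, the following Python sketch shows the defining feature of a visibly pushdown automaton: the input alphabet is partitioned into call, return, and internal symbols, and the stack discipline is driven by that partition alone. This degenerate single-state VPA merely checks the well-nesting of login sessions, echoing one of the thesis's motivating applications; the alphabet and function names are illustrative, not VPAlib's API.

    # Toy VPA: accepts words whose 'login' calls are all matched by 'logout'.
    CALLS, RETURNS, INTERNALS = {"login"}, {"logout"}, {"read", "write"}


    def run_vpa(word):
        stack = []
        for sym in word:
            if sym in CALLS:
                stack.append(sym)          # call symbols always push
            elif sym in RETURNS:
                if not stack:              # unmatched return: reject
                    return False
                stack.pop()                # return symbols always pop
            elif sym in INTERNALS:
                pass                       # internal symbols ignore the stack
            else:
                raise ValueError(f"symbol {sym!r} not in the visible alphabet")
        return not stack                   # accept iff every session is closed


    print(run_vpa(["login", "read", "login", "write", "logout", "logout"]))  # True
    print(run_vpa(["login", "read"]))                                        # False
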
613

Application-Level Virtual Memory for Object-Oriented Systems

Martinez Peck, Mariano 29 October 2012 (has links) (PDF)
When object-oriented applications run, several million objects may be created, used, and finally destroyed once they are no longer referenced. Problems arise, however, when objects that are no longer used cannot be destroyed because they are still referenced. Such objects waste main memory, so applications end up using more memory than they actually require. We argue that relying on the operating system's virtual memory manager is not always appropriate, because it is completely isolated from the applications: the operating system can take into account neither the domain nor the structure of applications, and applications have no way to control or influence virtual memory management. In this dissertation we present Marea, a virtual memory manager driven by object-oriented applications. It is an original solution that lets developers manage virtual memory at the application level: an application's developers can instruct our system to free main memory by transferring unused but still referenced objects to secondary storage (such as a hard disk). Besides describing the model and the algorithms underlying Marea, we present our implementation in the Pharo language. Our approach has been validated both qualitatively and quantitatively: experiments and measurements on real-world applications show that Marea can reduce the memory footprint by 25% and up to 40%.
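
The core mechanism, an unused but still referenced object graph swapped to secondary storage behind a lightweight placeholder, can be sketched in a few lines of Python. The names (SwappedProxy, swap_out) and the use of pickle are illustrative assumptions; Marea's actual implementation works at the level of the Pharo virtual machine.

    # Swap an object out to disk; leave a proxy that reloads it on first use.
    import pickle
    import tempfile


    class SwappedProxy:
        """Placeholder left in main memory after its target was swapped out."""

        def __init__(self, path):
            self._path = path   # only the file location stays in main memory

        def __getattr__(self, name):
            # First touch of any other attribute: swap the object back in.
            with open(self._path, "rb") as f:
                target = pickle.load(f)
            return getattr(target, name)


    def swap_out(obj):
        """Move obj to secondary storage and hand back a lightweight proxy."""
        f = tempfile.NamedTemporaryFile(delete=False, suffix=".swapped")
        pickle.dump(obj, f)
        f.close()
        return SwappedProxy(f.name)


    class Report:
        def __init__(self, rows):
            self.rows = rows

        def total(self):
            return sum(self.rows)


    report = swap_out(Report([1, 2, 3]))   # main memory now holds only the proxy
    print(report.total())                  # 6 -- transparently swapped back in
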
614

Design of a Proof-Checking Kernel for the λΠ-Calculus Modulo

Boespflug, Mathieu 18 January 2011 (has links) (PDF)
Recent years have seen the emergence of interactive proof assistants rich in features and mature in implementation, enabling large formalizations of paper results and the settlement of famous conjectures. But so many proof assistants rest on almost as many logics as theoretical foundations. Cousineau and Dowek (2007) propose the λΠ-calculus modulo as a universal target framework for all these proof environments. In this dissertation we show how this remarkably simple formalism admits a proof checker of modest size, yet modular and efficient, to whose correctness the consistency of entire systems can be reduced.

A growing number of proofs depend on intensive computation, as in Gonthier's proof of the four-color theorem (2007). Methodologies such as SSReflect and its accompanying tools favor proofs containing many small computations over purely deductive proofs. Encoding proofs from other systems into the λΠ-calculus modulo introduces still more computation. We show how to tame the size of these computations by interpreting entire proofs as functional programs, which can be compiled to machine code using standard, off-the-shelf compilers. To this end we employ an untyped variant of normalization by evaluation (NbE), and show how to optimize previous formulations of it.

Through a single small change to the interpretation of proof terms, we also arrive at a representation of proofs in higher-order abstract syntax (HOAS), which naturally admits a type-checking algorithm requiring no explicit typing context. We generalize this algorithm to all pure type systems (PTS). We observe that this algorithm extends, to a setting with dependent types, the type-checking algorithm of the HOL family of proof assistants. This observation leads us to develop an LCF-style architecture for a large class of PTSs, that is, an architecture in which all proof terms are correct by construction, a priori, and thus need not be checked a posteriori. We formally prove in Coq a correspondence theorem between context-free type systems and their standard counterparts with explicit contexts. This work bridges two historical lineages of proof assistants: the one descending from LCF, from which we borrow the kernel architecture, and the one descending from Automath, from which we inherit the notion of dependent types.

The algorithms presented in this dissertation are at the heart of a new proof checker called Dedukti and have also been transferred to a more mature system: Coq. In collaboration with Dénès, we show how to extend untyped NbE to handle the syntax and reduction rules of the calculus of inductive constructions (CIC). In collaboration with Burel, we generalize previous work by Cousineau and Dowek (2007) on encoding a large class of PTSs into the λΠ-calculus modulo to PTSs with inductive types, pattern matching, and fixpoint operators.
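
To illustrate the evaluation technique at the heart of the checker, here is a minimal Python sketch of untyped normalization by evaluation for the pure λ-calculus: terms are evaluated into host-language closures, and stuck applications are read back into normal forms. This is the textbook algorithm, not Dedukti's actual kernel, and the term encoding below is an illustrative choice.

    # Terms: ("var", i) de Bruijn index, ("lam", body), ("app", f, a).
    # Semantic values: Python closures, or ("neu", level, args) for stuck terms.

    def evaluate(term, env):
        tag = term[0]
        if tag == "var":
            return env[term[1]]
        if tag == "lam":
            return lambda v, t=term[1]: evaluate(t, [v] + env)
        if tag == "app":
            f, a = evaluate(term[1], env), evaluate(term[2], env)
            return f(a) if callable(f) else ("neu", f[1], f[2] + [a])


    def readback(value, depth):
        if callable(value):
            # Apply to a fresh neutral variable and read back the result.
            fresh = ("neu", depth, [])
            return ("lam", readback(value(fresh), depth + 1))
        _, head, args = value
        term = ("var", depth - head - 1)   # convert level back to an index
        for arg in args:
            term = ("app", term, readback(arg, depth))
        return term


    def normalize(term):
        return readback(evaluate(term, []), 0)


    # (\x. x) applied to (\x. \y. x) normalizes to \x. \y. x.
    identity = ("lam", ("var", 0))
    const = ("lam", ("lam", ("var", 1)))
    print(normalize(("app", identity, const)) == const)  # True
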
615

Type Systems for Distributed Programs: Components and Sessions

Dardha, Ornela 19 May 2014 (has links) (PDF)
Modern software systems, in particular distributed ones, are everywhere around us and are at the basis of our everyday activities. Hence, guaranteeing their correctness, consistency and safety is of paramount importance, and their complexity makes the verification of such properties a very challenging task. It is natural to expect that these systems are reliable and, above all, usable. i) In order to be reliable, compositional models of software systems need to account for consistent dynamic reconfiguration, i.e., changing at runtime the communication patterns of a program. ii) In order to be usable, compositional models of software systems need to account for interaction, which can be seen as communication patterns among components that collaborate to achieve a common task. The aim of the Ph.D. was to develop powerful techniques based on formal methods for the verification of correctness, consistency and safety properties related to dynamic reconfiguration and communication in complex distributed systems. In particular, static analysis techniques based on types and type systems appeared to be an adequate methodology, considering their success in guaranteeing not only basic safety properties but also more sophisticated ones, such as deadlock or livelock freedom in a concurrent setting. The main contributions of this dissertation are twofold. i) On the components side: we design types and a type system for a concurrent object-oriented calculus to statically ensure consistency of dynamic reconfigurations related to modifications of communication patterns in a program during execution time. ii) On the communication side: we study advanced safety properties related to communication in complex distributed systems, like deadlock-freedom, livelock-freedom and progress. Most importantly, we exploit an encoding of types and terms of a typical distributed language, the session π-calculus, into the standard typed π-calculus, in order to understand the expressive power of concurrent calculi with structured communication primitives and how they stand with respect to the standard typed concurrent calculi, namely (variants of) the typed π-calculus. Then, we show how to derive in the session π-calculus basic properties, like type safety, or complex ones, like progress, by encoding.
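
The flavor of structured communication that session types capture can be sketched in Python as a pair of endpoints holding dual protocols — checked dynamically here, whereas the thesis's type systems check them statically. All names and the protocol encoding are illustrative assumptions.

    # Two endpoints share a pair of queues; their protocols are duals:
    # where one endpoint sends, the other must receive, and vice versa.
    from collections import deque


    class Endpoint:
        def __init__(self, protocol, outbox, inbox):
            self.protocol = deque(protocol)   # list of ("send"|"recv", type)
            self.outbox, self.inbox = outbox, inbox

        def send(self, value):
            op, typ = self.protocol.popleft()
            assert op == "send" and isinstance(value, typ), "protocol violation"
            self.outbox.append(value)

        def recv(self):
            op, typ = self.protocol.popleft()
            assert op == "recv", "protocol violation"
            value = self.inbox.popleft()
            assert isinstance(value, typ), "payload has the wrong type"
            return value


    def session(protocol):
        """Create two endpoints with dual protocols over a pair of queues."""
        a_to_b, b_to_a = deque(), deque()
        flip = {"send": "recv", "recv": "send"}
        dual = [(flip[op], typ) for op, typ in protocol]
        return (Endpoint(protocol, a_to_b, b_to_a),
                Endpoint(dual, b_to_a, a_to_b))


    # Protocol for endpoint A: send an int, then receive a str, then end.
    a, b = session([("send", int), ("recv", str)])
    a.send(42)
    print(b.recv())   # 42
    b.send("ack")
    print(a.recv())   # 'ack'
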
616

The development of a method to assist in the transformation from procedural languages to object oriented languages with specific reference to COBOL and JAVA

Wing, Jeanette Wendy January 2002 (has links)
Thesis (M.Tech.: Computer Studies)-Dept. of Computer Science, Durban Institute of Technology, 2002. / Computer programming has been a science for approximately 50 years. In this time there have been two major paradigm shifts. The first was from "spaghetti code" to structured programs; the second is from procedural programs to object-oriented programs. A change in paradigm involves a change in the way a problem is approached and solved, as well as a difference in the language used. The languages chosen for study are COBOL and Java. These were identified as the key languages on which software development is most reliant: COBOL, the procedural language of existing business systems, and Java, the object-oriented language most likely to be used for future development. To complete this study, both languages were studied in detail, and their similarities and differences are discussed. Some key issues that a COBOL programmer has to keep in mind when moving to Java are identified.
617

Integrating models and simulations of continuous dynamic system behavior into SysML

Johnson, Thomas Alex 05 May 2008 (has links)
Contemporary systems engineering problems are becoming increasingly complex as they are handled by geographically distributed design teams, constrained by the objectives of multiple stakeholders, and inundated by large quantities of design information. According to the principles of model-based systems engineering (MBSE), engineers can effectively manage increasing complexity by replacing document-centric design methods with computerized, model-based approaches. In this thesis, modeling constructs from SysML and Modelica are integrated to improve support for MBSE. The Object Management Group has recently developed the Systems Modeling Language (OMG SysML) to provide a comprehensive set of constructs for modeling many common aspects of systems engineering problems (e.g., system requirements, structures, functions). Complementing these SysML constructs, the Modelica language has emerged as a standard for modeling the continuous dynamics (CD) of systems in terms of hybrid discrete-event and differential algebraic equation systems. The integration of SysML and Modelica is explored from three different perspectives: the definition of CD models in SysML; the use of graph transformations to automate the transformation of SysML CD models into Modelica models; and the integration of CD models and other SysML models (e.g., structural, requirements) through the depiction of simulation experiments and engineering analyses. Throughout the thesis, example models of a car suspension and a hydraulically powered excavator are used for demonstration. The core result of this work is the provision of modeling abilities that do not exist independently in SysML or Modelica. These abilities allow systems engineers to prescribe necessary system analyses and relate them to stakeholder concerns and other system aspects. Moreover, this work provides a basis for model integration which can be generalized and re-specialized for integrating other modeling formalisms into SysML.
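
The automated transformation step can be illustrated with a deliberately simple Python sketch that turns a SysML-like continuous-dynamics block into Modelica source text. Real graph transformations are far richer than this direct model-to-text rendering, and the quarter-car data below is an illustrative stand-in for the thesis's suspension example, not its actual model.

    # Render a dictionary describing a CD block as a Modelica model.
    suspension = {
        "name": "QuarterCarSuspension",
        "parameters": {"m": 250.0, "k": 16000.0, "c": 1000.0},
        "variables": ["x", "v"],
        "equations": ["der(x) = v", "m*der(v) = -k*x - c*v"],
    }


    def to_modelica(block):
        lines = [f"model {block['name']}"]
        for name, value in block["parameters"].items():
            lines.append(f"  parameter Real {name} = {value};")
        for name in block["variables"]:
            lines.append(f"  Real {name};")
        lines.append("equation")
        for eq in block["equations"]:
            lines.append(f"  {eq};")
        lines.append(f"end {block['name']};")
        return "\n".join(lines)


    print(to_modelica(suspension))
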
618

VERTIPH : a visual environment for real-time image processing on hardware : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Systems Engineering at Massey University, Palmerston North, New Zealand

Johnston, Christopher Troy January 2009 (has links)
This thesis presents VERTIPH, a visual programming language for the development of image processing algorithms on FPGA hardware. The research began with an examination of the whole design cycle, with a view to identifying requirements for implementing image processing on FPGAs. Based on this analysis, a design process was developed where a selected software algorithm is matched to a hardware architecture tailor-made for its implementation. The algorithm and architecture are then transformed into an FPGA-suitable design. It was found that in most cases the most efficient mapping for image processing algorithms is a streamed processing approach. This constrains how data is presented and requires most existing algorithms to be extensively modified; the resultant designs are therefore heavily streamed and pipelined. A visual notation was developed to complement this design process, as both streaming and pipelining can be well represented by data flow visual languages. The notation has three views, each of which represents and supports a different part of the design process. An architecture view gives an overview of the design's main blocks and their interconnections. A computational view represents lower-level details by representing each block by a set of computational expressions and low-level controls. This includes a novel visual representation of pipelining that simplifies latency analysis, multiphase design, priming, flushing and stalling, and the detection of sequencing errors. A scheduling view adds a state machine for high-level control of processing blocks; this extends state objects to allow for the priming and flushing of pipelined operations. User evaluations of an implementation of the key parts of this language (the architecture view and the computational view) found that both were generally good visualisations and aided in design (especially the type interface, pipeline and control notations). The user evaluations provided several suggestions for the improvement of the language; in particular, the evaluators would have preferred to use the diagrams as a verification tool for a textual representation rather than as the primary data capture mechanism. A cognitive dimensions analysis showed that the language scores highly for thirteen of the twenty dimensions considered, particularly those related to making details of the design clearer to the developer.
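
The streamed, pipelined style that the notation targets can be sketched with Python generators: pixels flow through stages one at a time, and a window-based stage must be primed with its first inputs before it can emit, which is exactly the latency the computational view makes explicit. The stage names are illustrative, not VERTIPH syntax.

    # A two-stage streamed pipeline: a windowed mean feeding a threshold.
    from collections import deque


    def threshold(stream, level=128):
        for pixel in stream:                 # pointwise stage: no latency
            yield 255 if pixel >= level else 0


    def window_mean(stream, width=3):
        window = deque(maxlen=width)
        for pixel in stream:
            window.append(pixel)
            if len(window) == width:         # primed: enough data buffered
                yield sum(window) // width   # one output per "clock" from now on


    scanline = [10, 200, 90, 250, 30, 180]
    pipeline = threshold(window_mean(scanline))
    print(list(pipeline))  # [0, 255, 0, 255] -- two outputs lost to priming latency
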
