441 |
Towards first-class references as a security infrastructure in dynamically typed languages (Vers des références de première classe comme infrastructure de sécurité dans les langages dynamiquement typés). Arnaud, Jean-Baptiste, 18 February 2013.
Dynamically typed object-oriented programming languages cannot provide type information before execution. Two of their main advantages are that they allow rapid prototyping and the integration of changes at run time. The ability of dynamically typed languages to accept program changes while the program runs, in the absence of type information, dooms classical security approaches to failure. Controlling references to objects and to object graphs is essential for building secure systems. Existing approaches are generally based on a static type system and cannot be applied to dynamically typed languages. This thesis argues that, in the context of dynamically typed object-oriented programming languages, reifying references, controlling their behavior, and isolating object state through such references is a practical way to control references. The thesis makes five contributions: we propose the notion of dynamic read-only objects (DRO) as a particular (read-only) behavioral change at the reference level; we generalize the DRO model to allow more generic behavioral changes and extend the Pharo language and programming environment with Handles, references that can change the behavior of the objects they refer to; we define the notion of Metahandle to provide flexibility and adaptability for controlled references; we propose the notion of SHandle to isolate side effects at the reference level; and finally, we formally describe the Handle and SHandle models in order to capture and explain their semantics. To validate the thesis we implemented three security-related approaches using our models. In addition, we extended the Pharo virtual machine to support Handles, Metahandles and SHandles.
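As a language-neutral illustration (a Scala sketch with hypothetical names; the thesis itself targets Pharo and its virtual machine, not this code), the reference-level idea behind DROs and Handles can be pictured as a wrapper that forwards reads but rejects writes:

```scala
// A minimal sketch: a reference-level wrapper that forwards reads and rejects
// writes, so "read-only" is a property of the reference, not of the object.
// Names (Account, ReadOnlyHandle) are illustrative, not the Pharo API.
class Account(var balance: Int)

class ReadOnlyHandle(target: Account) {
  def balance: Int = target.balance
  def balance_=(v: Int): Unit =
    throw new UnsupportedOperationException("write through a read-only reference")
}

object HandleDemo extends App {
  val acc = new Account(100)
  val ro  = new ReadOnlyHandle(acc)
  println(ro.balance)   // 100: reads go through
  acc.balance = 50      // the object stays mutable through ordinary references
  println(ro.balance)   // 50
  // ro.balance = 0     // would throw: this particular reference is read-only
}
```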
|
442 |
Completeness of Fact Extractors and a New Approach to Extraction with Emphasis on the Refers-to Relation. Lin, Yuan, 07 August 2008.
This thesis deals with fact extraction, which analyzes source code (and sometimes related artifacts) to produce extracted facts about the code. These facts may, for example, record where in the code variables are declared and where they are used, as well as related information. These extracted facts are typically used in software reverse engineering to reconstruct the design of the program.
This thesis has two main parts, each of which deals with a formal approach to fact extraction. Part 1 deals with the question: How can we demonstrate that a fact extractor actually does its job? That is, does the extractor produce the facts that it is supposed to produce? The thesis builds on the concept of semantic completeness of a fact extractor, as defined by Tom Dean et al., and further defines source, syntax and compiler completeness. One of the contributions of this thesis is to show that in certain important cases (when the extractor is deterministic and its front end is idempotent), there is an efficient algorithm to determine whether the extractor is compiler complete. This result is surprising, considering that in general it is undecidable whether two programs are semantically equivalent, and source code and its corresponding extracted facts are each essentially programs that must be proved equivalent, or at least sufficiently similar.
The larger part of the thesis, Part 2, presents Algebraic Refers-to Analysis (ARA), a new approach to fact extraction with emphasis on the Refers-to relation. ARA provides a framework for specifying fact extraction, based on a three-step pipeline: (1) basic (lexical and syntactic) extraction, (2) a normalization step and (3) a binding step.
For practical programming languages, these three steps are repeated, in stages and phases, until the Refers-to relation is computed. During the writing of this thesis, ARA pipelines for C, Java, C++, Fortran, Pascal and Ada have been designed. A prototype fact extractor for the C language has been created.
Validating ARA means demonstrating that ARA pipelines satisfy programming language standards such as the ISO C++ standard. In other words, we show that the ARA phases (stages and formulas) are correctly transcribed from the rules in the language standard.
Compared with existing approaches such as attribute grammars, ARA has the following advantages. First, ARA formulas are concise, elegant and, more importantly, insightful; as a result, we made some interesting discoveries about the programming languages themselves. Second, ARA is validated using set theory and relational algebra, which is more reliable than exhaustive testing. Finally, ARA formulas are supported by existing software tools such as database management systems and relational calculators.
Overall, the contributions of this thesis include 1) the invention of the concept of a hierarchy of completeness and of automatic completeness testing, 2) the use of the relational data model in fact extraction, 3) the invention of Algebraic Refers-to Analysis (ARA) and 4) the discovery of some interesting facts about programming languages.
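To give the relational flavour a concrete shape (a toy Scala sketch whose relation names and nearest-enclosing-scope rule are assumptions for illustration, not the ARA schema or formulas), a Refers-to relation for a block-structured language can be computed by joining uses with declarations along the chain of enclosing scopes:

```scala
// Toy relational computation of Refers-to: a use binds to the declaration of
// the same name in the nearest enclosing scope. Relation and field names are
// assumptions for illustration; they are not the ARA formulas.
case class Decl(scope: String, name: String)
case class Use(scope: String, name: String)

object RefersToDemo extends App {
  // parent scope chain: block -> function -> global
  val parent = Map("block" -> "function", "function" -> "global")
  val decls  = Set(Decl("global", "x"), Decl("function", "y"), Decl("block", "x"))
  val uses   = Set(Use("block", "x"), Use("block", "y"), Use("function", "x"))

  // enclosing scopes of s, nearest first (including s itself)
  def chain(s: String): List[String] =
    s :: parent.get(s).map(chain).getOrElse(Nil)

  // Refers-to: for each use, the first scope on the chain that declares the name
  val refersTo = for {
    u  <- uses
    sc <- chain(u.scope).find(s => decls.contains(Decl(s, u.name))).toList
  } yield u -> Decl(sc, u.name)

  refersTo.foreach { case (u, d) =>
    println(s"use of ${u.name} in ${u.scope} refers to the declaration in ${d.scope}")
  }
}
```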
|
444 |
Environment Analysis of Higher-Order Languages. Might, Matthew Brendon, 29 June 2007.
Any analysis of higher-order languages must grapple with the
tri-faceted nature of lambda. In one construct, the fundamental
control, environment and data structures of a language meet and
intertwine. With the control facet tamed nearly two decades ago, this
work brings the environment facet to heel, defining the environment
problem and developing its solution: environment analysis. Environment
analysis allows a compiler to reason about the equivalence of
environments, i.e., name-to-value mappings, that arise during a
program's execution. In this dissertation, two different
techniques, abstract counting and abstract frame strings, make this
possible. A third technique, abstract garbage collection, makes both
of these techniques more precise and, counter to intuition, often
faster as well. An array of optimizations and even deeper analyses
which depend upon environment analysis provide motivation for this
work.
In an abstract interpretation, a single abstract entity represents a
set of concrete entities. When the entities under scrutiny are
bindings (single name-to-value mappings, the atoms of an environment), then
determining when the equality of two abstract bindings implies the
equality of their concrete counterparts is the crux of environment
analysis. Abstract counting does this by tracking the size of
represented sets, looking for singletons, in order to apply the
following principle:
If {x} = {y}, then x = y.
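A minimal sketch of that principle (illustrative Scala; the names and the saturation ceiling are assumptions, not the dissertation's formalism) keeps a per-binding count saturated at "many" and infers concrete equality only from a singleton abstract binding:

```scala
// Abstract counting, minimally: each abstract binding carries a count of how
// many concrete bindings it may stand for, saturated at 2 ("many"). Equality
// of concrete bindings can be inferred only from a singleton abstract binding.
case class AbstractBinding(variable: String, allocSite: Int)

class CountingStore {
  private var counts = Map.empty[AbstractBinding, Int].withDefaultValue(0)

  def bind(b: AbstractBinding): Unit =
    counts = counts.updated(b, math.min(counts(b) + 1, 2))   // saturate at "many"

  def reset(b: AbstractBinding): Unit =                      // e.g. after abstract GC
    counts = counts.updated(b, 0)

  // If {x} = {y} is a singleton set, then x = y.
  def mustEqual(b1: AbstractBinding, b2: AbstractBinding): Boolean =
    b1 == b2 && counts(b1) == 1
}

object CountingDemo extends App {
  val store = new CountingStore
  val b = AbstractBinding("x", allocSite = 7)
  store.bind(b)
  println(store.mustEqual(b, b))  // true: exactly one concrete binding is represented
  store.bind(b)                   // a second concrete binding maps to the same abstract one
  println(store.mustEqual(b, b))  // false: equality can no longer be inferred
}
```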
Abstract frame strings enable environmental reasoning by statically
tracking the possible stack change between the births of two
environments; when this change is effectively empty, the environments
are equivalent. Abstract garbage collection improves precision by
intermittently removing unreachable environment structure during
abstract interpretation.
|
445 |
Roles and Collaborations in Scala (Rollen und Kollaborationen in Scala). Pradel, Michael, 26 June 2008.
The interrelations of a set of software objects are usually manifold and complex. Common object-oriented programming languages provide constructs for structuring objects according to shared properties and behavior, but fail to provide abstraction mechanisms for the interactions of objects. Roles seem to be a promising approach to solving this problem, as they focus on the behavior of an object in a certain context. Combining multiple roles yields collaborations, an interesting abstraction and reuse unit. However, existing approaches towards roles in programming languages require vast extensions of the underlying language or even propose new languages. We propose a programming technique that enables role-based programming with commonly available language constructs. Thus, programmers can express roles and collaborations by simply using a library, and hence without the need to change the language, its compiler, and its tools. We explain our proposal on a language-independent level. Moreover, we provide an implementation in the form of a library for the Scala programming language. Finally, we apply our ideas to design patterns and analyze to what extent these can be expressed and reused with roles.
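A much-simplified sketch of the idea (plain Scala with illustrative names, not the API of the library developed in the thesis) shows a role as an ordinary wrapper that adds context-specific behavior, and a collaboration as an object grouping interacting roles:

```scala
// Minimal role sketch with plain Scala constructs: a role wraps a core object
// and adds context-specific behaviour without changing the core class. Names
// (Person, Customer, ...) are illustrative, not the thesis library's API.
class Person(val name: String)

trait Role[Core] { def core: Core }

class Customer(val core: Person) extends Role[Person] {
  private var orders = List.empty[String]
  def order(item: String): Unit = orders = item :: orders
  def history: List[String] = orders.reverse
}

// A collaboration groups the roles that interact in one context.
class ShopCollaboration {
  def checkout(c: Customer, item: String): Unit = {
    c.order(item)
    println(s"${c.core.name} bought $item")
  }
}

object RolesDemo extends App {
  val alice = new Person("Alice")
  val aliceAsCustomer = new Customer(alice)   // the same object can play other roles elsewhere
  new ShopCollaboration().checkout(aliceAsCustomer, "book")
  println(aliceAsCustomer.history)            // List(book)
}
```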
|
446 |
Semantics-based change-merging of abstract data types. Chadha, Vineet, January 2002.
Thesis (M.S.)--Mississippi State University. Department of Computer Science. / Title from title screen. Includes bibliographical references.
|
447 |
Dynamic software updates: a VM-centric approach. Subramanian, Suriya, 26 January 2011.
Because software systems are imperfect, developers are forced to fix bugs
and add new features. The common way of applying changes to a running
system is to stop the application or machine and restart with the new
version. Stopping and restarting causes a disruption in service that is at
best inconvenient and at worst causes revenue loss and compromises safety.
Dynamic software updating (DSU) addresses these problems by updating
programs while they execute. Prior DSU systems for managed languages like
Java and C# lack necessary functionality: they are inefficient and do not
support updates that occur commonly in practice.
This dissertation presents the design and implementation of Jvolve, a DSU
system for Java. Jvolve's combination of flexibility, safety, and
efficiency is a significant advance over prior approaches. Our key
contribution is the extension and integration of existing Virtual Machine
services with safe, flexible, and efficient dynamic updating
functionality. Our approach is flexible enough to support a large class of
updates, guarantees type-safety, and imposes no space or time overheads on
steady-state execution.
Jvolve supports many common updates. Users can add, delete, and change
existing classes. Changes may add or remove fields and methods, replace
existing ones, and change type signatures. Changes may occur at any level
of the class hierarchy. To initialize new fields and update existing ones,
Jvolve applies class and object transformer functions, the former for
static fields and the latter for object instance fields. These features
cover many updates seen in practice. Jvolve supports 20 of 22
updates to three open-source programs (Jetty web server, JavaEmailServer,
and CrossFTP server), based on actual releases occurring over a one to two
year period. This support is substantially more flexible than prior
systems.
Jvolve is safe. It relies on bytecode verification to statically type-check
updated classes. To avoid dynamic type errors due to the timing of an
update, Jvolve stops the executing threads at a DSU safe point and then
applies the update. DSU safe points are a subset of VM safe points, where
it is safe to perform garbage collection and thread scheduling. DSU safe
points further restrict the methods that may be on each thread's stack,
depending on the update. Restricted methods include updated methods for
code consistency and safety, and user-specified methods for semantic
safety. Jvolve installs return barriers and uses on-stack replacement to
speed up reaching a safe point when necessary. While Jvolve does not
guarantee that it will reach a DSU safe point, in our multithreaded
benchmarks it almost always does.
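The safe-point condition itself can be pictured with a small sketch (illustrative Scala over made-up thread stacks; the real check runs inside the VM): the update may proceed only when no restricted method is active on any stack.

```scala
// Sketch of the DSU safe-point check (illustrative names; the real test runs
// inside the VM over actual thread stacks): an update may proceed only if no
// restricted method is active on any thread's stack.
object SafePointDemo extends App {
  type Stack = List[String]                      // method names, innermost first

  def isDsuSafePoint(stacks: Seq[Stack], restricted: Set[String]): Boolean =
    stacks.forall(stack => stack.forall(m => !restricted.contains(m)))

  val restricted = Set("Mail.deliver")           // e.g. an updated method
  val threads = Seq(
    List("Idle.wait", "Scheduler.loop"),
    List("Mail.deliver", "Server.handle"))       // still running old code

  println(isDsuSafePoint(threads, restricted))   // false: wait, or force exit via return barriers / OSR
}
```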
Jvolve includes a tool that automatically generates default object
transformers which initialize new and changed fields to default values and
retain values of unchanged fields in heap objects. If needed, programmers
may customize the default transformers. Jvolve is the first dynamic
updating system to extend the garbage collector to identify and transform
all object instances of updated types. This dissertation introduces the
concept of object-specific state transformers to repair application heap
state for certain classes of bugs that corrupt part of the heap, and a
novel methodology that employs dynamic analysis to automatically generate
these transformers. Jvolve's eager object transformation design and
implementation supports the widest class of updates to date.
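The effect of a default object transformer can be sketched outside the VM roughly as follows (illustrative Scala using plain Java reflection and hypothetical classes; Jvolve itself performs the transformation inside the virtual machine, driven by the garbage collector):

```scala
import java.lang.reflect.Field

// Sketch of a "default object transformer": copy fields whose name and type
// are unchanged, leave added fields at their default values. The classes here
// are hypothetical; Jvolve does this inside the VM, not via library reflection.
class PointV1 { var x: Int = 1; var y: Int = 2 }
class PointV2 { var x: Int = 0; var y: Int = 0; var color: String = "black" }  // new field

object DefaultTransformer {
  def transform(oldObj: AnyRef, newObj: AnyRef): Unit = {
    val oldFields: Map[String, Field] =
      oldObj.getClass.getDeclaredFields.map(f => f.getName -> f).toMap
    for (nf <- newObj.getClass.getDeclaredFields) {
      oldFields.get(nf.getName).filter(_.getType == nf.getType).foreach { of =>
        of.setAccessible(true); nf.setAccessible(true)
        nf.set(newObj, of.get(oldObj))      // retain value of unchanged field
      }                                     // fields with no match keep their defaults
    }
  }
}

object TransformerDemo extends App {
  val v1 = new PointV1
  val v2 = new PointV2
  DefaultTransformer.transform(v1, v2)
  println((v2.x, v2.y, v2.color))           // (1,2,black)
}
```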
Finally, Jvolve is efficient. It imposes no overhead during steady-state
execution. During an update, it imposes overhead on class loading and
garbage collection. After an update, the adaptive compilation system will
incrementally optimize the updated code in its usual fashion. Jvolve is the
first full-featured dynamic updating system that imposes no steady-state
overhead.
In summary, Jvolve is the most-featured, most flexible, safest, and
best-performing dynamic updating system for Java and marks a significant
step towards practical support for dynamic updates in managed language
virtual machines.
|
448 |
A programming model integrating classes, events and aspects (Un modèle de programmation intégrant classes, événements et aspects). Núñez, Angel, 29 June 2011.
Object-oriented programming (OOP) has become the most widely used programming paradigm. Event-based programming (EP) and aspect-oriented programming (AOP) complement OOP by filling some of its gaps when building complex software, and today's applications combine the three paradigms. However, OOP, EP and AOP are not yet well integrated. Their underlying concepts are generally provided as specific syntactic constructs, despite what they have in common. This lack of integration and orthogonality complicates software: it reduces understandability and composability and increases infrastructure code. This thesis proposes an integration of OOP, EP and AOP that leads to a simple and regular programming model. The model unifies the notions of class and aspect, the notions of event and join point, and the notions of action, method and event handler. It reduces the number of constructs while preserving the original expressiveness and even offering additional programming options. We designed and implemented two programming languages based on this model: EJava and ECaesarJ. EJava is an extension of Java implementing the model. We validated the expressiveness of this language by reimplementing a well-known graphical editor, JHotDraw, reducing the required infrastructure code and improving its design. ECaesarJ is an extension of CaesarJ that combines our model with mixin composition and linguistic support for state machines. This combination greatly eased the implementation of a smart-home application, an industrial case study in the field of home automation.
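For illustration only (a Scala sketch with made-up names such as Event and Figure; this is not EJava or ECaesarJ syntax, nor the thesis's unified model), the kind of unification aimed at can be pictured by treating a method call as an event on which aspect-like handlers are registered:

```scala
// Illustration of the general idea: a method call is also an event, and an
// "aspect" is just an object whose handlers (actions) are registered on it.
class Event[A] {
  private var handlers = List.empty[A => Unit]
  def +=(h: A => Unit): Unit = handlers = h :: handlers
  def announce(a: A): Unit = handlers.foreach(h => h(a))
}

class Figure {
  val moved = new Event[(Int, Int)]
  def moveTo(x: Int, y: Int): Unit = {
    // ... update coordinates ...
    moved.announce((x, y))        // the method doubles as a join point / event source
  }
}

class LoggingAspect(fig: Figure) {
  fig.moved += { case (x, y) => println(s"figure moved to ($x, $y)") }  // handler as advice-like action
}

object UnifiedDemo extends App {
  val f = new Figure
  new LoggingAspect(f)
  f.moveTo(3, 4)   // prints: figure moved to (3, 4)
}
```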
|
449 |
An extensible interpreter for prototyping aspect languages (Un interpréteur extensible pour le prototypage des langages d'aspects). Assaf, Ali, 21 October 2011.
The value of using different aspect languages to address a variety of crosscutting concerns in the development of complex software systems is well recognized, and one should be able to use several of these languages within a single piece of software. However, on the one hand, developing a new domain-specific aspect language that captures all the programming patterns of the domain takes a great deal of time and, on the other hand, the designer must manage the interactions with the other languages when they are used simultaneously. In this thesis, we introduce support for the rapid prototyping and composition of aspect languages, based on interpreters. We start from an interpreter for a subset of Java, studying and defining its modular extension to support aspect-oriented programming based on a shared aspect semantics. In the aspect interpreter, we implemented mechanisms common to aspect languages, leaving holes to be filled in order to implement concrete aspect languages. The strength of this approach is that languages can be implemented directly from their semantics. The approach is validated by the implementation of a lightweight version of AspectJ. To apply the same approach and architecture to Java without modifying its interpreter (the JVM), we reuse AspectJ to perform a first, static weaving step, complemented by a second, dynamic weaving step implemented as a thin interpretation layer; this is an example of the benefits of reconciling interpretation and compilation. Prototypes for AspectJ, EAOP, COOL and simple domain-specific languages validate our approach. We show the openness of our AspectJ implementation by describing two extensions: the first enables dynamic scheduling of aspects, the second offers alternative semantics for pointcuts. Aspect languages implemented with our approach can be easily composed, and the composition can be customized.
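As a rough illustration of the interpreter-with-holes idea (the AST, the Aspect trait and every name below are assumptions for the sketch, not the thesis's Java interpreter), the evaluation of a call can be routed through a hook where a concrete aspect language decides whether and how to run advice:

```scala
// Sketch of an "open interpreter": the evaluation of a call site is routed
// through a hook where a concrete aspect language can decide to run advice.
// All names are illustrative assumptions.
sealed trait Expr
case class Num(n: Int)                    extends Expr
case class Call(fname: String, arg: Expr) extends Expr

trait Aspect {                            // the "hole" a concrete aspect language fills in
  def around(fname: String, arg: Int, proceed: Int => Int): Int
}

class Interpreter(funs: Map[String, Int => Int], aspects: List[Aspect]) {
  def eval(e: Expr): Int = e match {
    case Num(n)           => n
    case Call(fname, arg) =>
      val v = eval(arg)
      // weave: each aspect wraps the next; the innermost layer is the plain call
      val proceed = aspects.foldRight(funs(fname)) { (a, next) =>
        (x: Int) => a.around(fname, x, next)
      }
      proceed(v)
  }
}

object InterpDemo extends App {
  val tracing = new Aspect {
    def around(f: String, arg: Int, proceed: Int => Int): Int = {
      println(s"before $f($arg)"); val r = proceed(arg); println(s"after $f = $r"); r
    }
  }
  val interp = new Interpreter(Map("double" -> ((x: Int) => 2 * x)), List(tracing))
  println(interp.eval(Call("double", Num(21))))   // traces the call, prints 42
}
```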
|
450 |
Efficient search-based strategies for polyhedral compilation: algorithms and experience in a production compiler. Trifunovic, Konrad, 04 July 2011.
In order to exploit the performance advantages of current multicore and heterogeneous architectures, compilers are required to perform more and more complex program transformations. The search space of possible program optimizations is huge and unstructured, and selecting the best transformation and predicting its potential performance benefit is a major problem in today's optimizing compilers. A promising way to handle these optimizations is to focus on automatic loop optimizations expressed in the polyhedral model. Current approaches for optimizing programs in the polyhedral model broadly fall into two classes. The first class of methods is based on linear optimization of an analytical cost function; the second is based on exhaustive iterative search. While the first approach is fast, it can easily miss the optimal solution; the iterative approach is more precise, but its running time may be prohibitively expensive. In this thesis we present a novel search-based approach to program transformations in the polyhedral model. The new method combines the benefits of the current approaches (effectiveness and precision) while minimizing their drawbacks. Our approach is based on enumerating evaluations of a precise, nonlinear performance-predicting cost function. Current practice is to use the polyhedral model in source-to-source compilers; we have instead implemented our techniques in the GCC framework, which is based on a low-level three-address-code representation. We show that this level of abstraction for the intermediate representation poses scalability challenges, and we show ways to overcome them. On the other hand, we show that the low-level IR opens new degrees of freedom that are beneficial for search-based transformation strategies and for polyhedral compilation in general.
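To picture the search strategy (a toy Scala sketch with a made-up cost model and schedule space; this is not the actual GCC or polyhedral machinery), one can enumerate a bounded space of candidate schedules and keep the candidate that minimizes a nonlinear predictive cost function:

```scala
// Toy illustration of the search strategy: enumerate a bounded space of
// candidate schedules and keep the one with the best value of a nonlinear
// predictive cost function. The cost model below is invented for the sketch.
case class Schedule(interchange: Boolean, tileSize: Int)

object SearchDemo extends App {
  val candidates =
    for {
      interchange <- Seq(false, true)
      tile        <- Seq(1, 16, 32, 64)
    } yield Schedule(interchange, tile)

  // A made-up nonlinear cost: tiling helps up to a point, interchange helps
  // only in combination with tiling.
  def predictedCost(s: Schedule): Double = {
    val tilingTerm      = 100.0 / s.tileSize + 0.05 * s.tileSize
    val interchangeTerm = if (s.interchange && s.tileSize > 1) 0.7 else 1.0
    tilingTerm * interchangeTerm
  }

  val best = candidates.minBy(predictedCost)
  println(s"best candidate: $best, predicted cost ${predictedCost(best)}")
}
```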
|