51
Study of the interpreters (scripting languages) used on the World Wide Web, the Internet, virtual reality and related technologies, and development of a pilot network-centric interpreter system
Ψιστάκης, Ιωσήφ, 08 February 2010
Modern computing environments are characterised by heavy use of interpreters (scripting languages). Among other things, interpreters are used for system administration (bash, powershell, msi), for automating procedures in applications (Microsoft Office, Alias Maya, Sonar, AutoCad), for building digital games (UnrealScript, LUA), and for developing applications for the World Wide Web.
In the case of the Internet in particular, the evolution of the World Wide Web appears to be intertwined with the parallel evolution of interpreters: new languages are created to support new technologies. Today, many heterogeneous technologies interact so that server and browser can generate and present interactive content to the user. On the server side, languages such as Php, Jsp and Asp.Net are used. The generated content is presented to the user with languages such as html, Javascript and Actionscript.
Interpreters are becoming ever more widespread, as they offer flexibility in code development, independence from the underlying architecture and reduced development times, since no compilation is required and their syntax is simplified. To improve performance, modern interpreters use Just-in-Time compilation techniques: in essence, they translate the source code into intermediate code that is directly executable by an execution engine (virtual machine). This further blurs the distinction between interpreter and compiler. Because of these characteristics, and above all because of their architecture independence, interpreters form the basis of the overwhelming majority of Web Applications.
Nevertheless, as the complexity of web applications grows, the use of existing technologies slows development considerably: even a simple dynamic web page requires code in at least three languages, since each component of the application (database, server, client) is programmed in a different language environment. The programmer has to manage the communication and interaction of these heterogeneous components and resolve any incompatibilities. A large part of an application's development time is spent managing this heterogeneous system, and with the arrival of new technologies such as Ajax, Silverlight and JavaFX, handling the inherent problems of this system becomes even harder.
This thesis surveys the main contemporary interpreter technologies for application development, with emphasis on Web technologies. Each language is analysed separately, and its particularities and distinctive features are discussed. The inherent incompatibility problems of the existing development model are identified, and an alternative, unified model for developing web applications is presented, one that consolidates and simplifies the programming procedures of the client and server systems. / Scripting languages find many applications in modern computing environments. Scripts are used in various scenarios, including system administration (bash, powershell, msi), job automation (Microsoft Office, Alias Maya, Sonar, AutoCad), logic programming in computer entertainment and of course in Internet Applications.
Internet technologies are the primary application area for scripts (and interpreters in general): the evolution of the World Wide Web is closely tied to that of scripting systems, with new scripting languages arising to support newly developed technologies. In modern internet applications, different technologies co-exist and interact so that the server and the client can present interactive content to the end user. On the server side, technologies like Php, Jsp and Asp.Net are used. The resulting content is presented on the client side with technologies such as html/xml, Javascript, Flex and Actionscript.
Scripting Languages (interpreters) are becoming increasingly popular, as they offer versatility, platform-independence and reduced development times, since they feature a simplified syntax and they do not require complex compilation procedures. To increase performance, modern Scripting systems feature Just-In-Time compilation techniques, effectively creating a compiled version of their input source code. This version of the code can be easily executed by the system’s Virtual Machine (Execution Engine). Such techniques blur the borders between Compilers and Interpreters. The features detailed above make interpreters the ideal solution for web application development, mainly because they are inherently cross-platform. Most of the available web-technologies expose an interpreted language system.
However, as the complexity of modern web applications and related technologies increases, current scripting systems are becoming a bottleneck in the development process. To develop a proper dynamic web page, the programmer is required to use at least three different languages: a language for accessing the database (sql), a language to program the server side of the application (Php, Jsp, Asp), and a set of languages to present content to the end user (javascript). It is up to the programmer to orchestrate the various scripts and manage any incompatibilities that arise when using these independent systems. This hampers the development process, as extra effort is spent managing the programming environment rather than actually developing the program. With new technologies like Silverlight, Ajax and Flex arising, managing the inherent incompatibilities becomes even more difficult.
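To make the heterogeneity concrete, here is a minimal, hypothetical request handler in the spirit of the paragraph above: a single toy page mixes SQL, a server-side host language, and client-side HTML/JavaScript, all of which the programmer must keep consistent by hand. The function, table and field names are invented for illustration only and do not come from the thesis.

```python
import sqlite3

def render_product_page(db_path: str, product_id: int) -> str:
    """Toy handler: one dynamic page, three languages (SQL, Python, HTML/JavaScript)."""
    conn = sqlite3.connect(db_path)
    # Language 1: SQL, embedded as an opaque string the host language cannot check.
    row = conn.execute(
        "SELECT name, price FROM products WHERE id = ?", (product_id,)
    ).fetchone()
    conn.close()
    if row is None:
        return "<html><body>Product not found</body></html>"
    name, price = row
    # Language 2: the server-side language (Python here) assembles the response.
    # Language 3: HTML plus JavaScript, again embedded as strings with their own rules.
    return f"""<html>
  <body>
    <h1>{name}</h1>
    <p id="price">{price}</p>
    <script>
      // Client-side behaviour lives in yet another language.
      document.getElementById('price').textContent += ' EUR';
    </script>
  </body>
</html>"""
```

Even in this sketch, a renamed column, a malformed tag or a typo in the script is only discovered at run time, which is the kind of cross-language friction the abstract describes.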
To tackle the increased development complexity, a new web application development paradigm is explored, and the features of the corresponding language are detailed, as is a simple implementation scheme. A study of available mainstream scripting languages, with a focus on Web development, is also presented. Each language is described through its key features and syntax and compared with similar development systems.
52
A generic framework facilitating automated quality assurance across programming languages of disparate paradigms
Owens, Darryl, January 2016
This research aims to outline a framework, based on the procedural and object-oriented paradigms, that facilitates generic automated quality assurance. Alongside the outline, a skeleton framework has been developed to evaluate the research, and, with the final aim of expanding the framework's footprint, the theoretical inclusion of other programming paradigms is discussed. This research developed a taxonomy of quality assurance techniques in order to identify potential candidates for generic quality assurance and also to minimise experimental requirements, as the taxonomy categories are generated from implementation requirements; this means that a category can be deemed feasible within the scope of this framework if a single technique from it can be implemented. The novel aspects of this research are the taxonomy, the paradigm-specific framework and, finally, the theorised paradigm-generic framework. An experimental method has been used to provide evidence supporting the claims made by this research, accompanied by a study of the literature that provides a foundation for all areas discussed. Although a paradigm-generic framework can be achieved, the internal representation used in this research showed that applying it to the logical paradigm would not be simple and would have little benefit in the scope of automated quality assurance. That said, the procedural, object-oriented, and functional paradigms have been demonstrated as feasible, with significant impact on programming language development and automated quality assurance of software.
53
Enabling high quality executable domain specific language specification
Lai, Qinan, January 2015
Domain Specific Languages (DSLs) are becoming common practice for describing models at a higher level of abstraction, using a notation that domain experts understand. Designing a DSL usually starts with creating a language specification, from which the other tools of the DSL are derived. Hence, the quality of the language specification can crucially impact the quality of the complete DSL tool chain. Although many methods for defining a language specification have been proposed, the quality of the language specifications they produce is not emphasised. This thesis explores the quality of language specifications, and proposes consistency, correctness, executability, understandability, and interoperability as the key features that a high quality language specification possesses. Given the importance of these features, this thesis designs a new language definition approach based on the newly published OMG standards, namely the semantics of the foundational subset of UML (fUML) and the Action Language for fUML (ALF). This approach enables the creation of a language specification with the proposed criteria. Moreover, a software framework that simplifies the production of high quality language specifications is built. Finally, a software development process is developed, which analyses the roles, products, and activities in DSL specification development. The framework is demonstrated by defining the language specification of the Business Process Execution Language (BPEL) as a case study. The BPEL specification is further evaluated, which confirms that it possesses the desired quality features.
54
Improving market risk management with heuristic algorithms
Kleinknecht, Manuel, January 2017
Recent changes in the regulatory framework for banking supervision increase the regulatory oversight and minimum capital requirements for financial institutions. In this thesis, we research active portfolio optimisation techniques with heuristic algorithms to manage the new regulatory challenges faced in risk management. We first study whether heuristic algorithms can support risk management in finding global optimal solutions that reduce the regulatory capital requirements. In a benchmark comparison of variance, Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) objective functions combined with different optimisation routines, we show that the Threshold Accepting (TA) heuristic algorithm reduces the capital requirements compared with the Trust-Region (TR) local search algorithm. Secondly, we introduce a new risk management approach based on the Unconditional Coverage test to optimally manage the regulatory capital requirements while avoiding over- or underestimating the portfolio risk. In an empirical analysis with TA and TR optimisation, we show that our new approach successfully optimises the portfolio risk-return profile and reduces the capital requirements. Next, we analyse the effect of different estimation techniques on the capital requirements. More specifically, empirical and analytical VaR and CVaR estimation is compared with a simulation-based approach using a multivariate GARCH process. The optimisation is performed using the Population-Based Incremental Learning (PBIL) algorithm. We find that the parametric and empirical distribution assumptions generate similar results and neither clearly outperforms the other. However, portfolios optimised with the simulation approach reduce the capital requirements by about 11%. Finally, we introduce a global VaR and CVaR hedging approach with a multivariate GARCH process and PBIL optimisation. Our hedging framework provides a self-financing hedge that reduces transaction costs by using standardised derivatives. The empirical study shows that the new approach increases the stability of the portfolio while avoiding high transaction costs. The results are compared with benchmark portfolios optimised with a Genetic Algorithm.
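For reference, the sketch below shows one standard way of estimating VaR and CVaR empirically from a vector of historical portfolio returns, as a reminder of the risk measures on which the objective functions above are built; it is not the thesis's own optimisation code, and the sample data are invented.

```python
import numpy as np

def empirical_var_cvar(returns: np.ndarray, alpha: float = 0.99):
    """Empirical VaR and CVaR (expected shortfall) at confidence level alpha.

    Losses are the negative of returns, so both figures are reported as
    positive numbers for a long portfolio.
    """
    losses = -np.asarray(returns)
    var = np.quantile(losses, alpha)      # loss exceeded with probability 1 - alpha
    tail = losses[losses >= var]          # losses in the tail at or beyond VaR
    cvar = tail.mean()                    # average tail loss
    return var, cvar

# Invented example: 1000 simulated daily returns.
rng = np.random.default_rng(0)
sample = rng.normal(loc=0.0005, scale=0.01, size=1000)
var99, cvar99 = empirical_var_cvar(sample, alpha=0.99)
print(f"99% VaR:  {var99:.4f}")
print(f"99% CVaR: {cvar99:.4f}")
```

In an optimisation setting, a routine such as Threshold Accepting would repeatedly re-weight the portfolio, recompute figures like these on the weighted return series, and accept or reject the new weights accordingly.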
55
A formal verification approach to process modelling and composition
Papapanagiotou, Petros, January 2014
Process modelling is a design approach where a system or procedure is decomposed into a number of abstract, independent, but connected processes, and then recomposed into a well-defined workflow specification. Research in formal verification, and in theorem proving in particular, focuses on the rigorous verification of system properties using logical proof. This thesis introduces a systematic methodology for process modelling and composition based on formal verification. Our aim is to augment the numerous benefits of a workflow-based specification, such as modularity, separation of concerns, interoperability between heterogeneous (including human-based) components, and optimisation, with the high level of trust provided by formally verified properties, such as type correctness, systematic resource accounting (including exception handling), and deadlock-freedom. More specifically, we focus on bridging the gap between the deeply theoretical proofs-as-processes paradigm and the highly pragmatic tasks of process specification and composition. To accomplish this, we embed the proofs-as-processes paradigm within the modern proof assistant HOL Light. This allows the formal, mechanical translation of Classical Linear Logic (CLL) proofs to π-calculus processes. Our methodology then relies on the specification of abstract processes in CLL terms and their composition using CLL inference. A fully diagrammatic interface is used to guide our developed set of high-level, semi-automated reasoning tools, and to perform intuitive composition actions including sequential, parallel, and conditional composition. The end result is a π-calculus specification of the constructed workflow, with guarantees of correctness for the aforementioned properties. We can then apply a visual, step-by-step simulation of this workflow or perform an automated workflow deployment as executable code in the programming language Scala. We apply our methodology to a use-case of a holiday booking web agent and to the modelling of real-world collaboration patterns in healthcare, thus demonstrating the capabilities of our framework and its potential use in a variety of scenarios.
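As an informal analogy to the sequential, parallel and conditional composition actions mentioned above, the sketch below composes abstract processes represented as plain functions over named resources; it only illustrates the composition idea, not the CLL/π-calculus machinery, and all names in it (including the toy holiday workflow) are invented.

```python
from typing import Callable

Process = Callable[[dict], dict]   # a process consumes and produces named resources

def seq(p: Process, q: Process) -> Process:
    """Sequential composition: q runs on the resources produced by p."""
    return lambda res: q(p(res))

def par(p: Process, q: Process) -> Process:
    """Parallel composition: run p and q independently and merge their outputs."""
    def both(res: dict) -> dict:
        out = dict(res)
        out.update(p(res))
        out.update(q(res))
        return out
    return both

def cond(test: Callable[[dict], bool], p: Process, q: Process) -> Process:
    """Conditional composition: choose p or q depending on the available resources."""
    return lambda res: p(res) if test(res) else q(res)

# Invented toy workflow: book a flight, then book a hotel and insurance in parallel.
book_flight = lambda r: {**r, "flight": "booked"}
book_hotel = lambda r: {**r, "hotel": "booked"}
buy_insurance = lambda r: {**r, "insurance": "bought"}

holiday = seq(book_flight, par(book_hotel, buy_insurance))
print(holiday({"budget": 1000}))
```

The formal framework described in the abstract plays a similar structuring role, but with the added guarantee that only well-typed, resource-accounted, deadlock-free compositions can be constructed.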
56
The scalability of reliable computation in Erlang
Ghaffari, Amir, January 2015
With the advent of many-core architectures, scalability is a key property for programming languages. Actor-based frameworks like Erlang are fundamentally scalable, but in practice they have some scalability limitations. The RELEASE project aims to scale Erlang's radical concurrency-oriented programming paradigm to build reliable general-purpose software, such as server-based systems, on emergent commodity architectures with 10,000 cores. The RELEASE consortium works to scale Erlang at the virtual machine, language, and infrastructure levels, and to supply profiling and refactoring tools. This research contributes to the RELEASE project at the language level. Firstly, we study the provision of scalable persistent storage options for Erlang. We articulate the requirements for scalable and available persistent storage, and evaluate four popular Erlang DBMSs against these requirements. We investigate the scalability limits of the Riak NoSQL DBMS using Basho Bench on up to 100 nodes of the Kalkyl cluster and establish for the first time scientifically the scalability limit of Riak as 60 nodes, thereby confirming developer folklore. We design and implement DE-Bench, a scalable fault-tolerant peer-to-peer benchmarking tool that measures the throughput and latency of distributed Erlang commands on a cluster of Erlang nodes. We employ DE-Bench to investigate the scalability limits of distributed Erlang on up to 150 nodes and 1200 cores. Our results demonstrate that the frequency of global commands limits the scalability of distributed Erlang. We also show that distributed Erlang scales linearly up to 150 nodes and 1200 cores with relatively heavy data and computation loads when no global commands are used. As part of the RELEASE project, the Glasgow University team has developed Scalable Distributed Erlang (SD Erlang) to address the scalability limits of distributed Erlang. We evaluate SD Erlang by designing and implementing the first ever demonstrators for SD Erlang, namely DE-Bench, Orbit and Ant Colony Optimisation (ACO). We employ DE-Bench to evaluate the performance and scalability of group operations in SD Erlang on up to 100 nodes. Our results show that the alternatives SD Erlang offers for global commands (i.e. group commands) scale linearly up to 100 nodes. We also develop and evaluate an SD Erlang implementation of Orbit, a symbolic computing kernel and a generalisation of a transitive closure computation. Our evaluation results show that SD Erlang Orbit outperforms the distributed Erlang Orbit on 160 nodes and 1280 cores. Moreover, we develop a reliable distributed version of ACO and show that the reliability of ACO limits its scalability in traditional distributed Erlang. We use SD Erlang to improve the scalability of the reliable ACO by eliminating global commands and avoiding full mesh connectivity between nodes. We show that SD Erlang effectively reduces the network traffic between nodes in an Erlang cluster.
57
An information theoretic approach to the expressiveness of programming languages
Davidson, Joseph Ray, January 2016
The conciseness conjecture is a longstanding notion in computer science that programming languages with more built-in operators, that is, more expressive languages with larger semantics, produce smaller programs on average. Chaitin defines the related concept of an elegant program, such that there is no smaller program in some language which, when run, produces the same output. This thesis investigates the conciseness conjecture in an empirical manner. Influenced by the concept of elegant programs, we investigate several models of computation and implement a set of functions in each programming model. The programming models are Turing Machines, λ-Calculus, SKI, RASP, RASP2, and RASP3. The information content of the programs and models is measured in characters. They are compared to investigate hypotheses relating to how the mean program size changes as the size of the semantics changes, and how the relationship of mean program sizes between two models compares to that between the sizes of their semantics. We show that the amount of information present in models of the same paradigm, or model family, is a good indication of relative expressivity and average program size. Models that contain more information in their semantics have smaller average programs for the set of tested functions. In contrast, the relative expressiveness of models from differing paradigms is not indicated by their relative information contents. RASP and Turing Machines have been implemented as Field Programmable Gate Array (FPGA) circuits to investigate hardware analogues of the hypotheses above: namely, that the amount of information in the semantics of a model directly influences the size of the corresponding circuit, and that the relationship of mean circuit sizes between models is comparable to the relationship of mean program sizes. We show that the number of components in the circuits that realise the semantics and programs of the models correlates with the information required to implement the semantics and programs of a model. However, the number of components needed to implement a program as a circuit in one model does not relate to the number of components implementing the same program in another model. This is in contrast to the more abstract implementations of the programs. Information is a computational resource and therefore follows the rules of Blum's axioms. These axioms and the speedup theorem are used to obtain an alternative proof of the undecidability of elegance. This work is a step towards unifying the formal notion of expressiveness with algorithmic information theory, and it exposes a number of interesting research directions. A start has been made on integrating the results of the thesis with the formal framework for the expressiveness of programming languages.
58
Normalization and learning of transducers on trees and words / Normalisation et apprentissage de transducteurs d'arbres et de mots
Boiret, Adrien, 07 November 2016
The development of the Web has motivated the appearance of numerous kinds of semi-structured data formats for problems related to Web technologies, such as document processing or database management. We study here the conversion of semi-structured data from one schema to another. For document processing, XML technology offers the most powerful solution to this problem. In XML, semi-structured data are data trees whose schemas can be defined by tree automata with constraints on data values. Document transformations are specified in XSLT, a functional language equipped with logical XPath queries. The core of XSLT corresponds to macro tree transducers with navigation by XPath queries. We propose new algorithms for learning tree transducers, based on grammatical inference methods. We address schema restriction, lookahead, and concatenation in the output. 1. We give a normal form and a learning algorithm in Gold's model with limited resources for deterministic top-down tree transducers with regular domain inspection. 2. We show how to learn rational functions, described by deterministic word transducers with lookahead. We propose a new normal form that allows learning with polynomial resources. 3. For linear tree-to-word transducers, which allow concatenation in the output, we present a normal form and show how to decide equivalence in polynomial time. / Since the arrival of the Web, various kinds of semi-structured data formats were introduced in the areas of computer science and technology relevant for the Web, such as document processing, database management, knowledge representation, and information exchange. In this thesis, we study the conversion of semi-structured data from one schema to another. For document processing, the most powerful solutions to this problem were proposed by XML technology. In the XML format, semi-structured data is restricted to data trees, so that schemas can be defined by tree automata, possibly enhanced by constraints on data values. Document transformations can be defined in XSLT, a purely functional programming language with logical XPath queries. The core of XSLT is macro tree transducers with navigation by XPath queries. We contribute new learning algorithms for tree transducers that are based on methods from grammatical inference. We address three limitations of previous approaches: schema restrictions, lookaheads, and concatenation in the output. 1. For deterministic top-down tree transducers with regular domain inspection, we show a normal form and a Gold-style learning algorithm with limited resources. 2. We show how to learn rational functions, described by deterministic transducers of words with lookahead. We propose a new normal form for such transducers which provides a compromise between lookahead and state minimisation, leading to a learning algorithm in Gold's learning model with polynomial resources. 3. For linear tree-to-word transducers with concatenation in the output, we present a normal form and show how to decide equivalence in polynomial time.
59
Development and evaluation of a framework for learning to program in higher education
Ringel, Robert, 20 August 2024
Development and testing of a framework for learning to program in higher education. Learning to program has been part of general university teaching since the 1980s. Nevertheless, after more than forty years it remains a highly topical subject for research on teaching and learning. Robins (2019) gives a comprehensive overview of the current state of research in this field. It shows that among instructors of introductory programming courses there is little agreement on methodology, on generally accepted teaching content, or on how to assess whether learning objectives have been reached. There is consensus that learning to program places complex cognitive demands on learners in the subareas of program comprehension and program writing, since programming requires describing the dynamic course of generic problem-solving processes by means of formal programming languages. The goal of this work is to design a framework model for learning to program that is grounded in the psychology of learning, and to test it in real teaching practice at a university.
1. Problem statement and objectives
2. Learning to program using a task-based methodology
2.1. The specifics of learning to program
2.2. Methodologies of task-based learning
2.3. Learning activities for learning to program
3. The design of task sequences for learning to program
3.1. Structuring the learning objectives
3.2. Development of learning tasks
3.3. Documentation of task sets
3.4. Implementation of the task sequences using Jupyter Notebook
3.5. Diagnosis of learning progress using a cognitive model representing the competence aspects
4. Accompanying studies of the programming course in university teaching
4.1. Feasibility study on assessing programming competence using the 5-S model
4.2. Feasibility study on the reliability of guided self-assessment in learning progress tests
4.3. Feasibility study on assessing productive mindsets using scales from educational studies
4.4. Results of the learning progress assessments in the context of the teaching-learning situation
4.5. Interviews with students and teachers as a retrospective on the course
4.6. Summary
5. Summary and outlook
5.1. The combined framework model for learning to program
5.2. Design of task sequences for learning to program
5.3. Model-based diagnosis of the learning state as part of a feedback process
5.4. Topics for further research
5.5. Reflection on this work using the IOOI method
5.6. Conclusion
60
Algebraic theory of type-and-effect systems
Kammar, Ohad, January 2014
We present a general semantic account of Gifford-style type-and-effect systems. These type systems provide lightweight static analyses annotating program phrases with the sets of possible computational effects they may cause, such as memory access and modification, exception raising, and non-deterministic choice. The analyses are used, for example, to justify the program transformations typically used in optimising compilers, such as code reordering and inlining. Despite their existence for over two decades, there is no prior comprehensive theory of type-and-effect systems accounting for their syntax and semantics, and justifying their use in effect-dependent program transformation. We achieve this generality by recourse to the theory of algebraic effects, a development of Moggi's monadic theory of computational effects that emphasises the operations causing the effects at hand and their equational theory. The key observation is that annotation effects can be identified with the effect operations. Our first main contribution is the uniform construction of semantic models for type-and-effect analysis by a process we call conservative restriction. Our construction requires an algebraic model of the unannotated programming language and a relevant notion of predicate. It then generates a model for Gifford-style type-and-effect analysis. This uniform construction subsumes existing ad-hoc models for type-and-effect systems, and is applicable in all cases in which the semantics can be given via enriched Lawvere theories. Our second main contribution is a demonstration that our theory accounts for the various aspects of Gifford-style effect systems. We begin with a version of Levy's Call-by-push-value that includes algebraic effects. We add effect annotations, and design a general type-and-effect system for such call-by-push-value variants. The annotated language can be thought of as an intermediate representation used for program optimisation. We relate the unannotated semantics to the conservative restriction semantics, and establish the soundness of program transformations based on this effect analysis. We develop and classify a range of validated transformations, generalising many existing ones and adding some new ones. We also give modularly-checkable sufficient conditions for the validity of these optimisations. In the final part of this thesis, we demonstrate our theory by analysing a simple example language involving global state with multiple regions, exceptions, and non-determinism. We give decision procedures for the applicability of the various effect-dependent transformations, and establish their soundness and completeness.
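As a rough illustration of why effect annotations can justify transformations such as code reordering, the following sketch tags toy computations with the sets of effect operations they may perform and permits swapping two computations only when their annotated effects do not interfere; it is an invented, drastically simplified analogue of the effect systems described above, not the semantics developed in the thesis.

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet

@dataclass(frozen=True)
class Annotated:
    """A computation together with the set of effect operations it may perform."""
    run: Callable[[dict], object]
    effects: FrozenSet[str]          # e.g. {"read:r1", "write:r1"}

def regions(c: Annotated, prefix: str) -> set:
    """Memory regions touched by effect operations with the given prefix."""
    return {e.split(":", 1)[1] for e in c.effects if e.startswith(prefix + ":")}

def may_commute(a: Annotated, b: Annotated) -> bool:
    """Reordering a;b into b;a is permitted only if neither computation writes a
    region that the other reads or writes (a crude non-interference check)."""
    a_all = regions(a, "read") | regions(a, "write")
    b_all = regions(b, "read") | regions(b, "write")
    return regions(a, "write").isdisjoint(b_all) and regions(b, "write").isdisjoint(a_all)

# Invented example: reads of distinct regions commute; a write blocks reordering.
load_x = Annotated(lambda s: s["x"], frozenset({"read:r1"}))
load_y = Annotated(lambda s: s["y"], frozenset({"read:r2"}))
store_x = Annotated(lambda s: s.update(x=0), frozenset({"write:r1"}))

print(may_commute(load_x, load_y))   # True: reads of disjoint regions
print(may_commute(load_x, store_x))  # False: a write to r1 interferes with a read of r1
```

The theory summarised in the abstract supplies the semantic justification that such annotation-driven checks only ever license transformations that preserve program meaning.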