1. Left-Incompatible Term Rewriting Systems and Functional Strategy. SAKAI, Masahiko, 12 1900.
No description available.
2. Performance assessment of Apache Spark applications. AL Jorani, Salam, January 2019.
This thesis addresses the challenges of large software and data-intensive systems. We discuss a Big Data application that consists of a substantial amount of Linux configuration, some Scala code, and a set of frameworks that work together to keep the system performing smoothly. The thesis focuses on the Apache Spark framework and on the difficulty of measuring the lazily evaluated transformation operations of Spark. Understanding these challenges helps performance engineers study how the system behaves and make decisions in early design iterations. To that end, we carried out a series of experiments and measurements, and from the analyzed results we derived a formula that engineers can use to predict the performance of the system in production.
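To make the measurement difficulty concrete, here is a minimal Scala sketch (not code from the thesis; the session setup and example data are assumptions) of how Spark's lazy transformations defeat naive timing: the map and filter calls return almost immediately because they only build a lineage graph, and the real cost appears only when an action such as count forces evaluation.

    import org.apache.spark.sql.SparkSession

    object LazyTiming {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("lazy-timing")
          .master("local[*]")
          .getOrCreate()
        val data = spark.sparkContext.parallelize(1 to 10000000)

        // Transformations are lazy: this returns in microseconds,
        // since no computation has happened yet.
        val t0 = System.nanoTime()
        val transformed = data.map(_ * 2).filter(_ % 3 == 0)
        val t1 = System.nanoTime()

        // The action forces the whole pipeline; the real cost shows up here.
        val n = transformed.count()
        val t2 = System.nanoTime()

        println(f"transformations: ${(t1 - t0) / 1e6}%.3f ms")
        println(f"action (count):  ${(t2 - t1) / 1e6}%.3f ms, n = $n")
        spark.stop()
      }
    }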
3. An Efficient Functional Library for Finite Automata. Říha, Jakub, January 2017.
Finite automata are an important mathematical abstraction, and in formal verification they are used as a concise representation of regular languages. Operations often performed on finite automata in this setting are testing their universality and language inclusion. A naive approach to implementing these operations leads to an explicit determinization of the automata, which can be costly and undesirable. There is, however, a more advanced method for performing them, called the Antichains algorithm, which avoids such an explicit determinization. This work shows how finite automata operations can be implemented efficiently in Haskell and compares several implementation approaches. The obtained results are compared with VATA, an imperative implementation of a finite automata library.
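The following is a rough sketch of the antichain idea for universality checking, written in Scala rather than the thesis's Haskell, with illustrative names throughout: macrostates of the subset construction are explored on the fly, and any macrostate that is a superset of one already kept can be pruned, since the smaller set rejects every word the larger one does.

    // An NFA with transition function delta; all names are illustrative.
    final case class Nfa[Q, A](
        alphabet: Set[A],
        delta: (Q, A) => Set[Q],
        init: Set[Q],
        accepting: Set[Q]
    )

    object Antichains {
      // Universality: does the NFA accept every word over its alphabet?
      def isUniversal[Q, A](nfa: Nfa[Q, A]): Boolean = {
        def post(s: Set[Q], a: A): Set[Q] = s.flatMap(q => nfa.delta(q, a))
        def rejecting(s: Set[Q]): Boolean = s.intersect(nfa.accepting).isEmpty
        var antichain = List.empty[Set[Q]]   // subset-minimal macrostates seen
        var frontier = List(nfa.init)
        while (frontier.nonEmpty) {
          val s = frontier.head
          frontier = frontier.tail
          if (rejecting(s)) return false     // witness: some word is rejected
          if (!antichain.exists(_.subsetOf(s))) {      // s is not subsumed
            antichain = s :: antichain.filterNot(t => s.subsetOf(t))
            frontier = nfa.alphabet.toList.map(a => post(s, a)) ++ frontier
          }
        }
        true
      }
    }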
4. Eager, Lazy, and Other Executions for Predicative Programming. Lai, Yu Cheong Albert, 08 August 2013.
Many programs are executed according to the conventional, eager execution order, for which verification of execution costs is well understood. However, other execution orders are in use. One in common use is lazy execution, or lazy evaluation, which is mostly demand-driven. Laziness supports better decompositions of algorithms, e.g., into modular producers and consumers, which enables compositional reasoning about answer correctness; timing correctness, however, becomes more elusive. This thesis gives a formal method for verifying lazy timing that is compositional with respect to program structure; it is an extension of a predicative programming theory.
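The producer-consumer decomposition can be made concrete with a small sketch, in Scala here for uniformity with the other examples (illustrative, not from the thesis): the producer is defined with no bound in mind, the consumer decides how much to demand, answer correctness composes, and the running time depends on the demand, which is exactly what makes lazy timing subtle.

    object LazyDecomposition {
      // Producer: all Fibonacci numbers, with no bound in mind.
      val fibs: LazyList[BigInt] =
        BigInt(0) #:: BigInt(1) #:: fibs.zip(fibs.tail).map { case (a, b) => a + b }

      // Consumer: decides how much of the producer's output to demand.
      def firstAbove(xs: LazyList[BigInt], bound: BigInt): BigInt =
        xs.dropWhile(_ <= bound).head

      def main(args: Array[String]): Unit = {
        // Only the prefix needed to exceed 1000 is ever computed.
        println(firstAbove(fibs, BigInt(1000)))   // 1597
      }
    }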
Predicative programming theories are formal methods that unify specifications and programs as predicates, or boolean-typed expressions, over memory state and other quantities of interest. Their strengths are mathematical simplicity and support for program development and verification by incremental refinement. Among these theories, Hehner's a Practical Theory of Programming has the further strength of leaving termination and timing open rather than built in, and is therefore a flexible substrate for various timing schemes corresponding to various execution strategies. We use this substrate for our method for lazy timing.
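As a one-line illustration of that unification (notation loosely after Hehner; not an example from the thesis): both the specification and the program refining it are boolean expressions over initial state x and final state x', and refinement is implication, so

    specification:  x' > x
    program:        x := x + 1        (defined as x' = x + 1)
    refinement:     (x' > x)  ⇐  (x' = x + 1)

holds because every final state the program can produce satisfies the specification; a timing scheme then adds a time variable t and clauses such as t' = t + 1 in the same style.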
This thesis also proves soundness of the eager timing scheme in Hehner's work with respect to an eager operational semantics, and our lazy timing scheme with respect to a lazy operational semantics. Thus, if refinements promise an upper time bound, then execution actually stops within that time.
Lastly, this thesis outlines a space of further operational semantics, a possible ground for more execution strategies.
5. The Provision of Non-Strictness, Higher Kinded Types and Higher Ranked Types on an Object Oriented Virtual Machine. Hunt, Oliver, January 2007.
We discuss the development of a number of algorithms and techniques that allow object-oriented virtual machines to support many of the features needed by functional and other higher-level languages, including non-strict evaluation, partial function application, and higher ranked and higher kinded types. To test these mechanisms, we have also produced a compiler that compiles the functional language Haskell to a native executable for the Common Language Runtime. This has allowed us to demonstrate that the techniques we have developed are practically viable.
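The usual building block for non-strictness on such a machine is a thunk object; here is a minimal, hypothetical Scala (JVM) sketch of that idea, not the thesis's actual mechanism: an object suspends a computation and memoizes its result on first use.

    // A memoizing thunk: one way to encode non-strict (call-by-need)
    // values as objects on an object-oriented virtual machine.
    final class Thunk[A](expr: => A) {
      private var cache: Option[A] = None
      def force: A = cache.getOrElse {
        val v = expr          // run the suspended computation once
        cache = Some(v)
        v
      }
    }

    object ThunkDemo {
      def main(args: Array[String]): Unit = {
        val t = new Thunk({ println("evaluating"); 6 * 7 })
        println("thunk created")   // nothing evaluated yet
        println(t.force)           // prints "evaluating", then 42
        println(t.force)           // memoized: prints only 42
      }
    }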
6. Classical realizability and side effects. Miquey, Étienne, 17 November 2017.
This thesis focuses on the computational content of classical proofs, and specifically on proofs with side effects and on Krivine's classical realizability. The manuscript is divided into three parts, the first of which is an extended introduction to the concepts used in the sequel.
The second part deals with the computational content of the axiom of dependent choice in classical logic. This work continues Hugo Herbelin's system dPAω, which adapts the constructive proof of the axiom of choice in Martin-Löf type theory into a constructive proof of the axiom of dependent choice in a setting compatible with classical logic. The principal goal of this part is to prove the normalization property for dPAω, on which the consistency of the system relies. Such a proof is hard to obtain because of the simultaneous presence of dependent types (for the constructive part of the choice), control operators (for classical logic), co-inductive objects (to "encode" functions of type N → A as streams (a₀,a₁,...)), and lazy evaluation with sharing (for these co-inductive objects). These difficulties are first studied separately. In particular, we prove the normalization of classical call-by-need (presented as an extension of the λµµ̃-calculus with shared environments) using Krivine-style realizability techniques. Next, we develop a classical sequent calculus with dependent types, again defined as an adaptation of the λµµ̃-calculus, whose soundness is proved through a CPS translation that takes the dependencies into account. Last, a sequent-calculus variant of dPAω is introduced, combining the two previous systems; its normalization is finally proved using realizability techniques.
The last part, more oriented towards semantics, studies the duality between the call-by-name and call-by-value evaluation strategies in a purely algebraic setting inspired by work around classical realizability (in particular Krivine's realizability algebras). This work relies on the notion of implicative algebras developed by Alexandre Miquel, a very simple algebraic structure that generalizes both complete Boolean algebras and Krivine's realizability algebras, so as to express in one setting the theory of forcing (in the sense of Cohen) and the theory of classical realizability (in the sense of Krivine). The main drawback of these structures is that they are deeply oriented towards the λ-calculus and only allow a faithful interpretation of call-by-name languages. To remedy this, we introduce two variants of implicative algebras: disjunctive algebras, centered on the "par" connective of linear logic (but in a non-linear framework) and naturally suited to call-by-name languages; and conjunctive algebras, centered on the "tensor" connective of linear logic and suited to call-by-value languages. Among other things, we prove that disjunctive algebras are particular cases of implicative algebras and that conjunctive algebras can be obtained from disjunctive algebras (by reversing the underlying order). Moreover, we show how to interpret in these frameworks the fragments of Guillaume Munch-Maccagnoni's system L corresponding to a call-by-value calculus (within conjunctive algebras) and to a call-by-name calculus (within disjunctive algebras).
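Since Scala supports both parameter-passing styles directly, the operational side of the duality can be shown in a few lines (purely illustrative; it has no bearing on the algebraic treatment above):

    object CbnVsCbv {
      def loudArg(): Int = { println("evaluating argument"); 21 }

      // Call-by-value: the argument is evaluated exactly once, at the call.
      def twiceCbv(x: Int): Int = x + x

      // Call-by-name: the argument is evaluated at each use,
      // and not at all if it is never used.
      def twiceCbn(x: => Int): Int = x + x
      def ignore(x: => Int): Int = 0

      def main(args: Array[String]): Unit = {
        println(twiceCbv(loudArg()))  // "evaluating argument" once, then 42
        println(twiceCbn(loudArg()))  // "evaluating argument" twice, then 42
        println(ignore(loudArg()))    // never evaluated; prints 0
      }
    }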
7. Metadata-Supported Object-Oriented Extension of Dynamic Geometry Software. Radaković, Davorka, 10 October 2019.
Nowadays, Dynamic Geometry Software (DGS) is widely accepted as a tool for creating and presenting visually rich interactive teaching and learning materials, called dynamic drawings. Dynamic drawings are specified by writing expressions in functional domain-specific languages. The wide acceptance of DGS has created a need for extensibility: adding new semantics and visual objects (visuals). We have developed a programming framework for dynamic geometry software, SLGeometry, with a genericized functional language and a corresponding expression evaluator that act as a framework into which specific semantics is embedded as code annotated with metadata. The framework transforms an ordinary expression tree evaluator into an object-oriented one, and provides guidelines and examples for the creation of interactive objects with dynamic properties, which participate in evaluation optimization at run time. Whereas other DGS are based on purely functional expression evaluators, our solution has the advantages of being more general and easy to implement, and of providing a natural way of specifying object properties in the user interface, minimizing typing and syntax errors.
SLGeometry is implemented in C# on the .NET Framework. Although attributes are the preferred mechanism for associating declarative information with C# code, they have certain restrictions that limit their application to representing complex structured metadata. By developing a metadata infrastructure that is independent of attributes, we were able to overcome these limitations. Our solution, presented in this dissertation, provides extensibility to simple and complex data types, unary and binary operations, type conversions, functions, and visuals, enabling developers to seamlessly add new features to SLGeometry by implementing them as C# classes annotated with metadata. It also provides insight into how a domain-specific functional language of dynamic geometry software can be genericized and customized for specific needs by extending or restricting the set of types, operations, type conversions, functions and visuals.
Furthermore, we conducted experiments with several groups of mathematics students and high-school pupils to test how our approach compares to existing practice. The experimental subjects tested mathematical games using interactive visual controls (UI controls) and sequential behavior controllers. Finally, we present a new evaluation algorithm, which was compared to the usual approach employed in DGS and found to perform well, introducing advantages while maintaining the same level of performance.
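The general shape of such an evaluator can be suggested with a much-simplified sketch, in Scala rather than the dissertation's C#, with metadata reduced to a plain registry entry rather than .NET attribute-style annotations; all names are illustrative. The point is that the evaluator stays generic while new operations, carrying their own metadata, are plugged in from outside.

    sealed trait Expr
    final case class Num(value: Double) extends Expr
    final case class Call(name: String, args: List[Expr]) extends Expr

    final case class OpMeta(name: String, arity: Int, description: String)
    final case class OpEntry(meta: OpMeta, impl: List[Double] => Double)

    object Evaluator {
      private var registry = Map.empty[String, OpEntry]

      // Extension point: register new semantics without touching eval.
      def register(meta: OpMeta)(impl: List[Double] => Double): Unit =
        registry += meta.name -> OpEntry(meta, impl)

      def eval(e: Expr): Double = e match {
        case Num(v) => v
        case Call(name, args) =>
          val entry = registry.getOrElse(name, sys.error(s"unknown operation: $name"))
          require(args.length == entry.meta.arity,
            s"$name expects ${entry.meta.arity} argument(s)")
          entry.impl(args.map(eval))
      }
    }

    object Demo {
      def main(args: Array[String]): Unit = {
        Evaluator.register(OpMeta("add", 2, "addition"))(xs => xs(0) + xs(1))
        Evaluator.register(OpMeta("neg", 1, "negation"))(xs => -xs(0))
        val expr = Call("add", List(Num(1.5), Call("neg", List(Num(0.5)))))
        println(Evaluator.eval(expr))   // 1.0
      }
    }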
8. Stratagems for effective function evaluation in computational chemistry. Skone, Gwyn S., January 2010.
In recent years, the potential benefits of high-throughput virtual screening to the drug discovery community have been recognized, bringing an increase in the number of tools developed for this purpose. These programs have to process large quantities of data, searching for an optimal solution in a vast combinatorial range. This is particularly the case for protein-ligand docking, since proteins are sophisticated structures with complicated interactions for which either molecule might reshape itself. Even the very limited flexibility model to be considered here, using ligand conformation ensembles, requires six dimensions of exploration - three translations and three rotations - per rigid conformation. The functions for evaluating pose suitability can also be complex to calculate. Consequently, the programs being written for these biochemical simulations are extremely resource-intensive. This work introduces a pure computer science approach to the field, developing techniques to improve the effectiveness of such tools. Their architecture is generalized to an abstract pattern of nested layers for discussion, covering scoring functions, search methods, and screening overall. Based on this, new stratagems for molecular docking software design are described, including lazy or partial evaluation, geometric analysis, and parallel processing implementation. In addition, a range of novel algorithms are presented for applications such as active site detection with linear complexity (PIES) and small molecule shape description (PASTRY) for pre-alignment of ligands. The various stratagems are assessed individually and in combination, using several modified versions of an existing docking program, to demonstrate their benefit to virtual screening in practical contexts. In particular, the importance of appropriate precision in calculations is highlighted.
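One of the stratagems named above, lazy or partial evaluation of the scoring function, can be suggested with a hypothetical Scala sketch (illustrative only; the decomposition into terms and the convention that lower scores are better are assumptions): score terms are suspended and forced one at a time, and a pose is abandoned as soon as its running score can no longer beat the best one found.

    object LazyScoring {
      // A pose's score as a sum of independent, individually costly terms
      // (e.g. steric clash, electrostatics); the decomposition is illustrative.
      type Term = () => Double   // a suspended scoring term

      // Partial evaluation with early exit: stop as soon as the running
      // penalty already matches or exceeds the best (lowest) score so far.
      def scoreOrBail(terms: List[Term], bestSoFar: Double): Option[Double] = {
        var acc = 0.0
        val it = terms.iterator
        while (it.hasNext) {
          acc += it.next()()                  // force one term at a time
          if (acc >= bestSoFar) return None   // cannot beat the best pose
        }
        Some(acc)
      }

      def main(args: Array[String]): Unit = {
        def term(cost: Double): Term = () => { println(s"computed $cost"); cost }
        val pose = List(term(3.0), term(9.0), term(2.0))
        // With best-so-far 10.0 the third term is never computed.
        println(scoreOrBail(pose, bestSoFar = 10.0))   // None, after two terms
      }
    }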