71

Type Classes and Instance Chains: A Relational Approach

Morris, John Garrett 04 June 2013 (has links)
Type classes, first proposed during the design of the Haskell programming language, extend standard type systems to support overloaded functions. Since their introduction, type classes have been used to address a range of problems, from typing ordering and arithmetic operators to describing heterogeneous lists and limited subtyping. However, while type class programming is useful for a variety of practical problems, its wider use is limited by the inexpressiveness and hidden complexity of current mechanisms. We propose two improvements to existing class systems. First, we introduce several novel language features, instance chains and explicit failure, that increase the expressiveness of type classes while providing more direct expression of current idioms. To validate these features, we have built an implementation, demonstrating their use in a practical setting and their integration with type reconstruction for a Hindley-Milner type system. Second, we define a set-based semantics for type classes that provides a sound basis for reasoning about type class systems, their implementations, and the meanings of programs that use them.
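The overloading the abstract describes can be grounded with a minimal illustration (our example, not the thesis's): one name, `describe`, resolved at many types, which is the mechanism that instance chains then extend with ordered alternatives and explicit failure.

```haskell
-- A minimal sketch of type-class overloading (our example, not the
-- thesis's): one class, several instances, one overloaded name.
class Describe a where
  describe :: a -> String

instance Describe Bool where
  describe b = if b then "yes" else "no"

instance Describe Int where
  describe n = "the number " ++ show n

-- Overloading extends pointwise through structured types.
instance Describe a => Describe [a] where
  describe xs = unwords (map describe xs)

main :: IO ()
main = putStrLn (describe [True, False])
```

In the thesis's proposal, an instance chain would let the programmer write a sequence of alternative instances tried in order, failing explicitly rather than silently, instead of relying on overlapping-instance tricks.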
72

Teaching Algebra through Functional Programming: An Analysis of the Bootstrap Curriculum

Lee, Robert 16 March 2013 (has links) (PDF)
Bootstrap is a computer-programming curriculum that teaches students to program video games using Racket, a functional programming language based on algebraic syntax. This study investigated the relationship between learning to program video games in a Bootstrap course and the resulting effect on students' understanding of algebra. Courses in three different schools, lasting about six weeks each, were studied. Control and treatment groups were given pre- and post-course algebra assessments. A qualitative component consisting of observations and interviews was also used to further triangulate findings. Statistical analysis revealed that students who completed the Bootstrap course gained a significantly better understanding of variables and showed a suggestive improvement in understanding functions. In the assessments, students failed to demonstrate a transfer of the advanced concepts of function composition and piecewise functions from programming to algebraic notation. Interviews with students demonstrated that, with coaching, students were able to relate functions written in Racket to functions written in algebraic notation, but were not yet able to transfer their experience of function composition from programming to algebra.
73

Trustworthy, Useful Languages for Probabilistic Modeling and Inference

Toronto, Neil B. 12 June 2014 (has links) (PDF)
The ideals of exact modeling, and of putting off approximations as long as possible, make Bayesian practice both successful and difficult. Languages for modeling probabilistic processes, whose implementations answer questions about them under asserted conditions, promise to ease much of the difficulty. Unfortunately, very few of these languages have mathematical specifications. This makes them difficult to trust: there is no way to distinguish between an implementation error and a feature, and there is no standard by which to prove optimizations correct. Further, because the languages are based on the incomplete theories of probability typically used in Bayesian practice, they place seemingly artificial restrictions on legal programs and questions, such as disallowing unbounded recursion and allowing only simple equality conditions. We prove it is possible to make trustworthy probabilistic languages for Bayesian practice by using functional programming theory to define them mathematically and prove them correct. The specifications interpret programs using measure-theoretic probability, which is a complete enough theory of probability that we do not need to restrict programs or conditions. We demonstrate that these trustworthy languages are useful by implementing them, and using them to model and answer questions about typical probabilistic processes. We also model and answer questions about processes that are either difficult or impossible to reason about precisely using typical Bayesian mathematical tools.
74

Functional Programming Languages and the JVM : Comparing Functional Language Compilation Techniques for the JVM / Funktionell Programmering och JVM:en : Jamföring av kompileringstekniker av funktionella språk till JVM:en

Olofsson, Asta January 2023 (has links)
Because of its security, high availability, and automatic memory management, the JVM (Java Virtual Machine) is a desirable execution environment. However, since the JVM was originally made for Java, which is an object-oriented language, it is hard for languages with other paradigms to run on it. This thesis explores the challenges of compiling a functional language for the JVM. We implement a backend generating JVM bytecode as an extension to the compiler of MCore, a minimal functional language based on the lambda calculus. The backend features two different ways of compiling functions: one generates classes representing functions and their environments at compile time, and one creates these function classes dynamically at runtime. The two approaches are compared with regard to execution speed of the generated bytecode, compiler output size, and recursion depth. The results show that the two ways of compiling functions perform well on different metrics.
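The first compilation technique the abstract describes, representing each function as a class holding its captured environment, is essentially closure conversion. A hypothetical sketch (names and types our own, not MCore's) of the idea:

```haskell
-- Closure conversion, sketched: a closure is code paired with the
-- environment it captured. A JVM backend can realise each such pair
-- as a class with a field per captured variable and an apply method.
data Closure env a b = Closure
  { code :: env -> a -> b  -- the lifted, environment-passing code
  , envt :: env            -- the captured free variables
  }

applyC :: Closure env a b -> a -> b
applyC (Closure f e) = f e

-- The lambda (\x -> x + y), with y free, becomes a closure whose
-- environment is just the captured y:
addY :: Int -> Closure Int Int Int
addY y = Closure (\env x -> x + env) y
```

The compile-time variant emits one such class per lambda; the runtime variant manufactures the classes dynamically as they are needed.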
75

Functional Programming and Metamodeling frameworks for System Design

Mathaikutty, Deepak Abraham 19 May 2005 (has links)
System-on-Chip (SoC) and other complex distributed hardware/software systems contain heterogeneous components whose behaviors are best captured by different models of computation (MoCs). As a result, any design framework for such systems requires the capability to express heterogeneous MoCs. Although a number of system-level design languages (SLDLs) and frameworks have proliferated over the last few years, most of them are lacking in multiple ways. Some of the SLDLs and system design frameworks we have worked with are SpecC, Ptolemy II, SystemC-H, etc. From our analysis of these, we identify the following shortcomings: first, their dependence on specific programming language artifacts (Java or C/C++) makes them less amenable to formal analysis; second, the refinement strategies proposed in the design flows based on these languages lack formal semantic underpinnings, making it difficult to prove that refinements preserve correctness; and third, none of the available SLDLs are easily customizable by users. In our work, we address these problems as follows. To alleviate the first problem, we follow Axel Jantsch's paradigm of function-based semantic definitions of MoCs and formulate a functional programming framework called SML-Sys. We illustrate through a number of examples how to model heterogeneous computing systems using SML-Sys. Our framework provides for formal reasoning due to the formal semantic underpinning it inherits from SML's precise denotational semantics. To handle the second problem and apply refinement strategies at a higher level, we propose a refinement methodology and provide a semantics-preserving transformation library within our framework. To address the third shortcoming, we have developed EWD, which allows users to customize MoC-specific visual modeling syntax defined as a metamodel. EWD is developed using the metamodeling framework GME (Generic Modeling Environment). It allows for automatic design-time syntactic and semantic checks on the models for conformance to their metamodel. Modeling in EWD facilitates saving the model in an XML-based interoperability language (IML) we defined for this purpose. The IML format is in turn automatically translated into Standard ML or Haskell models. These may then be executed and analyzed either by our existing model analysis tool SML-Sys or by the ForSyDe environment. We also generate SMV-based templates from the XML representation to obtain verification models. / Master of Science
76

Stream fusion : practical shortcut fusion for coinductive sequence types

Coutts, Duncan January 2011 (has links)
In functional programming it is common practice to build modular programs by composing functions where the intermediate values are data structures such as lists or arrays. A desirable optimisation for programs written in this style is to fuse the composed functions and thereby eliminate the intermediate data structures and their associated runtime costs. Stream fusion is one such fusion optimisation that can eliminate intermediate data structures, including lists, arrays and other abstract data types that can be viewed as coinductive sequences. The fusion transformation can be applied fully automatically by a general purpose optimising compiler. The stream fusion technique itself has been presented previously and many practical implementations exist. The primary contributions of this thesis address the issues of correctness and optimisation: whether the transformation is correct and whether the transformation is an optimisation. Proofs of shortcut fusion laws have typically relied on parametricity by making use of free theorems. Unfortunately, most functional programming languages have semantics for which classical free theorems do not hold unconditionally; additional side conditions are required. In this thesis we take an approach based not on parametricity but on data abstraction. Using this approach we prove the correctness of stream fusion for lists -- encompassing the fusion system as a whole, not merely the central fusion law. We generalise this proof to give a framework for proving the correctness of stream fusion for any abstract data type that can be viewed as a coinductive sequence and give as an instance of the framework, a simple model of arrays. The framework requires that each fusible function satisfies a simple data abstraction property. We give proofs of this property for several standard list functions. Previous empirical work has demonstrated that stream fusion can be an optimisation in many cases. 
In this thesis we take a more universal view and consider the issue of optimisation independently of any particular implementation or compiler. We make a semi-formal argument that, subject to certain syntactic conditions on fusible functions, stream fusion on lists is strictly an improvement, as measured by the number of allocations of data constructors. This detailed analysis of how stream fusion works may be of use in writing fusible functions or in developing new implementations of stream fusion.
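The coinductive-sequence view the thesis builds on can be sketched as follows. This is a simplified version of the published stream-fusion representation; the thesis's definitions differ in detail:

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- A coinductive stream: a hidden seed type plus a stepper function.
data Step s a = Done | Skip s | Yield a s
data Stream a = forall s. Stream (s -> Step s a) s

-- Conversions between lists and streams.
stream :: [a] -> Stream a
stream xs0 = Stream next xs0
  where next []     = Done
        next (x:xs) = Yield x xs

unstream :: Stream a -> [a]
unstream (Stream next s0) = go s0
  where go s = case next s of
          Done       -> []
          Skip s'    -> go s'
          Yield x s' -> x : go s'

-- Map over streams is non-recursive: this is what makes fusion work,
-- because composed steppers inline without an intermediate list.
mapS :: (a -> b) -> Stream a -> Stream b
mapS f (Stream next s0) = Stream next' s0
  where next' s = case next s of
          Done       -> Done
          Skip s'    -> Skip s'
          Yield x s' -> Yield (f x) s'
```

List functions are then defined as, for example, `map f = unstream . mapS f . stream`, and a compiler rewrite rule eliminating adjacent `stream . unstream` pairs removes the intermediate lists; the thesis's correctness argument covers this whole system, not only the central rewrite law.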
77

Design Decisions for Indie Development of Educational Video Games : A Case Study / Designbeslut för indieutveckling av pedagogiska datorspel : En fallstudie

Kuoppa, Andreas January 2019 (has links)
Educational video games – especially for the PC market – do not seem to perform as well commercially as games from other genres. We argue that there is room for independent – 'indie' – developers to break into the marketplace by identifying certain niches and innovating on the genre. This would generate commercial value for such actors and knowledge value for the players. Design decisions of high importance made during development of an educational video game demo at the small Swedish company Toleap Consulting AB were analysed in the pursuit of contributing to effective indie development of such games. Three main problems that arose during development were identified, and three design decisions were implemented to combat these respective problems: (1) an interpreted educational game pattern utilising XML, (2) function-based game views, and (3) community-created assets, open source, and no costly dependencies. In our case, the formulated design decisions effectively solved our problems, and we argue that they generalise. If a developer creating a similar game (an educational video game) in a similar situation (independent development with limited resources) encounters one or more of these problems, the suggested design decisions may help the developer solve them, in turn making more educational video games available on the market, generating the aforementioned commercial and knowledge values.
78

Abstract interpretation of domain-specific embedded languages

Backhouse, Kevin Stuart January 2002 (has links)
A domain-specific embedded language (DSEL) is a domain-specific programming language with no concrete syntax of its own. Defined as a set of combinators encapsulated in a module, it borrows the syntax and tools (such as type-checkers and compilers) of its host language; hence it is economical to design, introduce, and maintain. Unfortunately, this economy is counterbalanced by a lack of room for growth. DSELs cannot match sophisticated domain-specific languages that offer tools for domain-specific error-checking and optimisation. These tools are usually based on syntactic analyses, so they do not work on DSELs. Abstract interpretation is a technique ideally suited to the analysis of DSELs, due to its semantic, rather than syntactic, approach. It is based upon the observation that analysing a program is equivalent to evaluating it over an abstract semantic domain. The mathematical properties of the abstract domain are such that evaluation reduces to solving a mutually recursive set of equations. This dissertation shows how abstract interpretation can be applied to a DSEL by replacing it with an abstract implementation of the same interface; evaluating a program with the abstract implementation yields an analysis result, rather than an executable. The abstract interpretation of DSELs provides a foundation upon which to build sophisticated error-checking and optimisation tools. This is illustrated with three examples: an alphabet analyser for CSP, an ambiguity test for parser combinators, and a definedness test for attribute grammars. Of these, the ambiguity test for parser combinators is probably the most important example, due to the prominence of parser combinators and their rather conspicuous lack of support for the well-known LL(k) test. In this dissertation, DSELs and their signatures are encoded using the polymorphic lambda calculus.
This allows the correctness of the abstract interpretation of DSELs to be proved using the parametricity theorem: safety is derived for free from the polymorphic type of a program. Crucially, parametricity also solves a problem commonly encountered by other analysis methods: it ensures the correctness of the approach in the presence of higher-order functions.
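The core idea, that analysing a program amounts to evaluating it over an abstract domain, can be sketched for a DSEL given as a type class. This is a toy example of ours, loosely in the spirit of the alphabet analyser for CSP, not the dissertation's actual encoding:

```haskell
import qualified Data.Set as Set

-- A tiny process DSEL as a class: programs are polymorphic in p,
-- so the same program can be run against different implementations.
class Proc p where
  act   :: Char -> p     -- perform an event
  (|||) :: p -> p -> p   -- choice
  (>>>) :: p -> p -> p   -- sequencing

-- Concrete implementation: the traces a process can perform.
newtype Traces = Traces [String]
instance Proc Traces where
  act c = Traces [[c]]
  Traces a ||| Traces b = Traces (a ++ b)
  Traces a >>> Traces b = Traces [x ++ y | x <- a, y <- b]

-- Abstract implementation: evaluating the same program here yields
-- its alphabet, an analysis result rather than behaviour.
newtype Alphabet = Alphabet (Set.Set Char)
instance Proc Alphabet where
  act c = Alphabet (Set.singleton c)
  Alphabet a ||| Alphabet b = Alphabet (Set.union a b)
  Alphabet a >>> Alphabet b = Alphabet (Set.union a b)

example :: Proc p => p
example = (act 'a' ||| act 'b') >>> act 'c'
```

Because `example` is polymorphic in the implementation, instantiating it at `Alphabet` performs the analysis, and parametricity is exactly what relates the abstract result to the concrete behaviour.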
79

Compiling a synchronous programming language into field programmable gate arrays /

Shen, Ying, January 1999 (has links)
Thesis (M.Eng.)--Memorial University of Newfoundland, 1999. / Bibliography: leaves 100-102.
80

Rewriting Concurrent Haskell programs to STM

Silva Neto, Francisco Miranda Soares da 27 February 2014 (has links)
In recent years, the diminishing rate at which we can increase the number of transistors in a processor core has slowed the growth of computing power; Moore's Law appears to be drawing to an end. With it, the assumption that software written today will run more efficiently in the future simply due to processor evolution is being challenged. On the other hand, parallel applications can still be made more efficient by distributing work among different processors to be executed at the same time, thus reducing overall execution time. Parallelization requires multiple processor cores, which has led to the popularization of multicore architectures. However, writing parallel applications is not trivial. A program must either be written from the start for parallel execution, or later adapted for it. The programmer has the error-prone task of parallelizing the application through the use of concurrency and parallelism constructs. Locking, the most common concurrency option, presents risks for inexperienced programmers, such as the famous deadlock and livelock problems. As we move from single-core architectures to multicore, our programming languages need to make concurrency easier for programmers to use. Many researchers have pointed to Software Transactional Memory (STM) as an answer to that issue, as it is a lock-free, abstract way to guarantee isolated access to shared resources. But adapting a program that uses locks to STM is not simple. Besides being an error-prone task, technical details of the language may require special attention to preserve the program's behavior. In this dissertation, we propose a set of program transformations for concurrency constructs in Haskell, a purely functional programming language. They may be used to refactor a program's existing locks into transactional constructs from Haskell's STM implementation, allowing a programmer to gain the benefits of STM even for programs already developed using locks. Each transformation is accompanied by execution examples and a discussion of its ability to preserve program behavior. We also present a supporting study, in which a controlled experiment was used to evaluate the benefits of locks or STM for the development of Haskell programs. Although subjects' opinions tended to favor lock-based concurrency, those who used STM committed significantly fewer mistakes overall and required on average 12% less time to finish their assignments.
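The flavour of such lock-to-STM transformations can be sketched with a standard before/after pair. This is our illustration using the stock `MVar` and `stm` APIs, not a transformation taken from the dissertation's catalogue:

```haskell
import Control.Concurrent.MVar
import Control.Concurrent.STM

-- Before: an MVar used as a lock around shared state.
incrLock :: MVar Int -> IO ()
incrLock m = modifyMVar_ m (\n -> return (n + 1))

-- After: the same operation as an atomic transaction on a TVar.
incrSTM :: TVar Int -> IO ()
incrSTM t = atomically (modifyTVar' t (+ 1))

-- Transactions compose: both updates commit atomically. The lock-based
-- equivalent must acquire two locks and risks deadlock if another
-- thread takes the same pair in the opposite order.
transfer :: TVar Int -> TVar Int -> Int -> IO ()
transfer from to k = atomically $ do
  modifyTVar' from (subtract k)
  modifyTVar' to   (+ k)
```

The interesting cases the dissertation addresses are precisely where such rewrites are not this mechanical, for example where lock regions perform I/O that cannot be retried inside a transaction.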
