201

Problemas de alocação e precificação de itens / Allocation and pricing problems

Schouery, Rafael Crivellari Saliba 14 February 2014
In this thesis we consider item allocation and pricing problems, in which we have a set of items and a set of buyers interested in those items. Our goal is to choose an allocation of items to buyers, together with a pricing of those items, so as to maximize the profit obtained, taking into account the maximum value each buyer is willing to pay for a given item. In particular, we focus on three problems: the Max-Buying Problem, the Envy-Free Pricing Problem and the Second-Price Ad Auction. The Max-Buying Problem and the Envy-Free Pricing Problem model the situation faced in practice by companies that sell products or services and must price their offerings correctly in order to obtain a good profit. The Second-Price Ad Auction models the problem faced by search-engine companies that wish to sell advertising space alongside users' search results. Both questions, the pricing of products and services and the allocation of advertisers to search results, are of great economic relevance and are therefore worth attacking from both theoretical and practical points of view. Our focus in this work is on approximation algorithms and mixed-integer programming algorithms for these problems, presenting new results that improve on those previously known in the literature, as well as determining the computational complexity of these problems and of some particular cases of interest.
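As a rough illustration of the kind of pricing question the abstract describes, the sketch below computes the best single uniform price for an unlimited-supply instance by trying each buyer's budget as a candidate price. The instance data and the heuristic itself are illustrative assumptions, not the approximation algorithms or mixed-integer programs developed in the thesis.

```python
"""Toy uniform-pricing heuristic for an unlimited-supply instance.

Only a sketch of the kind of problem described in the abstract; the
valuations below are made up for the example.
"""

def best_uniform_price(valuations):
    """valuations[b][i] = max value buyer b is willing to pay for item i.

    With unlimited supply and one uniform price p, buyer b purchases some
    item iff max_i valuations[b][i] >= p, paying p.  An optimal uniform
    price can be found among the buyers' budgets themselves.
    """
    budgets = [max(v) for v in valuations]          # most each buyer would pay
    best_price, best_revenue = 0.0, 0.0
    for p in sorted(set(budgets)):
        revenue = p * sum(1 for b in budgets if b >= p)
        if revenue > best_revenue:
            best_price, best_revenue = p, revenue
    return best_price, best_revenue


if __name__ == "__main__":
    # three buyers, two items (hypothetical numbers)
    valuations = [[8.0, 3.0], [5.0, 5.0], [2.0, 9.0]]
    print(best_uniform_price(valuations))           # -> (8.0, 16.0)
```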
202

Algoritmos de negociação com dados de alta frequência / Algorithmic Trading with high frequency data

Uematsu, Akira Arice de Moura Galvão 20 March 2012
In this work we analyse data from BM&F Bovespa, the São Paulo stock exchange, for January 2011, relating to the BOVESPA index (IND), the mini BOVESPA index (WIN) and the exchange rate (DOL). These are high-frequency data that capture several aspects of the trading dynamics. The records include trade dates and times, prices, offered volumes and other characteristics of each trade. The first stage of the thesis was to extract the information needed for the analyses from a file in the FIX protocol; a program in R was developed for this purpose. We then studied the character of temporal dependence in the data, testing Markov properties with fixed and variable memory length. The results show great variability in the character of the dependence, which calls for further analysis. We believe this work will be valuable in future academic studies, in particular the part concerning the specific character of the FIX protocol used by Bovespa. This had been an obstacle in a number of academic studies, which is clearly undesirable, since Bovespa is one of the largest trading markets in the modern financial world.
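The thesis's extraction and dependence tests were implemented in R; as a language-agnostic illustration of how one might probe memory length in a symbol sequence, the following Python sketch compares empirical conditional entropies for increasing context lengths on synthetic data. It is a toy under stated assumptions, not the statistical tests used in the thesis.

```python
"""Sketch: probe the memory length of a symbol sequence by comparing the
empirical conditional entropy H(X_t | X_{t-k..t-1}) for increasing k.
Synthetic data only; not the thesis's R implementation or its tests.
"""
from collections import Counter
import math
import random


def conditional_entropy(seq, k):
    """Empirical H(next symbol | previous k symbols), in bits."""
    ctx_counts, pair_counts = Counter(), Counter()
    for i in range(k, len(seq)):
        ctx = tuple(seq[i - k:i])
        ctx_counts[ctx] += 1
        pair_counts[(ctx, seq[i])] += 1
    n = sum(ctx_counts.values())
    h = 0.0
    for (ctx, _), c in pair_counts.items():
        h -= (c / n) * math.log2(c / ctx_counts[ctx])
    return h


if __name__ == "__main__":
    # synthetic order-1 sequence: the next symbol tends to repeat the last one
    random.seed(0)
    seq, s = [], 0
    for _ in range(20000):
        s = s if random.random() < 0.8 else random.choice([0, 1, 2])
        seq.append(s)
    for k in range(0, 4):
        print(k, round(conditional_entropy(seq, k), 3))
    # the entropy should drop sharply from k=0 to k=1 and then flatten,
    # suggesting a memory length of about one symbol
```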
203

Ramverk för att motverka algoritmisk snedvridning / A framework for counteracting algorithmic bias

Engman, Clara, Skärdin, Linnea January 2019
The use of artificial intelligence (AI) has tripled in a year and is considered by some to be the most important paradigm shift in the history of technology. The ongoing AI race risks undermining questions of ethics and sustainability, which can have devastating consequences: in several cases, AI has been shown to reproduce, and even reinforce, existing biases in society in the form of prejudices and values. This phenomenon is called algorithmic bias. In the use of AI for the development of products and services there are many hidden risks that can be difficult to detect at an early stage; a well-known case is Google's image-recognition algorithm, which identified black people as gorillas. The purpose of this master's thesis is to create a framework that minimises the risk of algorithmic bias in AI development projects and to adapt it to a medium-sized consultancy. The work is divided into three parts. The first is a literature study of bias, both from a human, cognitive perspective and from an algorithmic perspective. The second is a review of existing frameworks and recommendations published by Facebook, Google, the AI Sustainability Center and the EU. The third is an empirical contribution in the form of a qualitative interview study, used to create and adjust an initial general framework in an iterative process. The framework was developed over two full iterations. The first version was based on insights from the literature study and the existing recommendations, and was validated by presenting it to one of Cybercom's private-sector customers, who could ask questions and give feedback. The second version was based on the results of qualitative interviews with machine learning experts at Cybercom. To validate the applicability of the framework to real projects and customers, a second qualitative interview study was carried out together with Sida, one of Cybercom's public-sector customers. Since the framework was formed in a circular process, the second version should not be treated as constant or complete; the interview study at Sida is regarded as the beginning of a third iteration, which could be developed further in future studies.
204

Structured higher-order algorithmic differentiation in the forward and reverse mode with application in optimum experimental design

Walter, Sebastian 07 May 2012
This thesis provides a framework for the efficient evaluation of first and higher-order derivatives and Taylor series expansions of large computer programs that contain numerical linear algebra (NLA) functions. It generalizes traditional algorithmic differentiation (AD), in which the underlying algorithms are augmented with additional statements, by regarding NLA functions as black boxes whose inputs and outputs are related by defining equations. Based on these defining equations, structure-exploiting algorithms are derived. More precisely, novel algorithms are presented for the propagation of Taylor polynomials through the QR, Cholesky and real-symmetric eigenvalue decompositions. Recurrences for the reverse mode of AD that require essentially only the returned factors of the decomposition are also derived. Compared with the traditional approach, in which all intermediates of an algorithm are stored, this reduces the memory requirement from O(N^3) to O(N^2) for algorithms of O(N^3) complexity, where N denotes the matrix size. The derived algorithms also make it possible to reuse existing high-performance implementations. A runtime comparison shows that treating NLA functions as atomic operations can be more than an order of magnitude faster than an algorithmic differentiation of the underlying algorithm. Furthermore, the computational graph is orders of magnitude smaller, which reduces both the additional memory requirements and the overhead of operator-overloading techniques.
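A minimal numpy sketch of the underlying idea, assuming only the standard defining equation A = L L^T: the first-order derivative of a Cholesky factor is obtained from the returned factor itself rather than by differentiating the factorization statement by statement, and is checked against finite differences. The thesis's actual contributions (higher-order Taylor propagation and reverse-mode recurrences for QR, Cholesky and eigenvalue decompositions) are not reproduced here.

```python
"""Forward-mode derivative propagation through a Cholesky factorization via
its defining equation A = L L^T.  Illustration only, not the higher-order or
reverse-mode algorithms developed in the thesis.
"""
import numpy as np


def phi(x):
    """Lower triangle of x with the diagonal halved."""
    return np.tril(x, -1) + 0.5 * np.diag(np.diag(x))


def dcholesky(a, da):
    """Given symmetric a and a symmetric direction da, return (L, dL) with
    a = L L^T and dL the directional derivative of the factor."""
    l = np.linalg.cholesky(a)
    # From dA = dL L^T + L dL^T and the lower-triangularity of L^{-1} dL:
    #   L^{-1} dL = phi(L^{-1} dA L^{-T})
    m = np.linalg.solve(l, da)            # L^{-1} dA
    m = np.linalg.solve(l, m.T).T         # L^{-1} dA L^{-T}
    return l, l @ phi(m)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    b = rng.standard_normal((4, 4))
    a = b @ b.T + 4 * np.eye(4)           # symmetric positive definite
    d = rng.standard_normal((4, 4))
    da = d + d.T                          # symmetric perturbation direction
    l, dl = dcholesky(a, da)
    eps = 1e-6                            # central finite-difference check
    fd = (np.linalg.cholesky(a + eps * da) - np.linalg.cholesky(a - eps * da)) / (2 * eps)
    print(np.max(np.abs(dl - fd)))        # should be tiny (rounding-level)
```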
205

Algoritmisk aktiehandel : Ett experiment i att förutsäga aktiemarknaden med hjälp av neurala nätverk / Algorithmic stocktrading : An experiment in predicting the stockmarket using neural networks

Mellgren, Henrik January 2019
Originally, shares served as a means for companies to secure financing for new ventures and investments. Companies issued share certificates that investors bought, and unlike an ordinary bank loan, the companies did not have to pay these shares back. This meant that the investors who bought shares had to take a long-term view, since a share purchase could be for life. The stock market is a market where these shares can be traded between investors. The advantage is that an investor can end an investment and cash it in early. The drawback is that the long-term perspective is no longer necessary for an investor: for many investors, how the stock market will move "tomorrow" became more important than whether the company they invest in makes a profitable investment over a ten-year horizon. The connection to the companies' underlying value therefore risks being broken. One consequence is that speculative bubbles build up on the stock market in cycles, with subsequent crashes that cause large losses of wealth for ordinary private individuals and major upheavals in society at large. This thesis explores the possibility of using machine learning as a tool for valuing shares and predicting coming price movements, with the aim of helping stock-market investors make better investment decisions. It starts from the data sources that stock-market analysts use to study this market, that is, technical and fundamental data. A system has been constructed that classifies companies using an artificial neural network (ANN) on fundamental data, and predicts the coming daily prices using a Long Short-Term Memory (LSTM) network on technical data. The algorithms have been evaluated both separately and as a combined system by performing simulated trading over a given test and validation period. The hypothesis tested is that processing fundamental data through an ANN and technical data through an LSTM will generate good investment recommendations. The results generated by the study have not been able to disprove this hypothesis.
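As a hypothetical sketch of the LSTM half of such a system, the Keras fragment below trains a small network to predict the next value of a price series from a sliding window. The window length, layer sizes, features and data are invented for illustration and do not reflect the architecture or data used in the thesis.

```python
"""Hypothetical LSTM next-day price predictor of the kind the abstract
describes.  All settings are illustrative assumptions and the input is
random placeholder data, not the thesis's configuration or market data.
"""
import numpy as np
import tensorflow as tf


def make_windows(series, window=20):
    """Turn a 1-D price series into (samples, window, 1) inputs and next-step targets."""
    x = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return x[..., None], y


def build_model(window=20):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window, 1)),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(1),            # next value (price or return)
    ])
    model.compile(optimizer="adam", loss="mse")
    return model


if __name__ == "__main__":
    prices = np.cumsum(np.random.randn(500)).astype("float32")  # placeholder series
    x, y = make_windows(prices)
    model = build_model()
    model.fit(x, y, epochs=2, batch_size=32, verbose=0)
    print(model.predict(x[-1:], verbose=0))  # predicted next value
```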
206

SALZA : mesure d’information universelle entre chaînes pour la classification et l’inférence de causalité / SALZA : universal information measure between strings for classification and causality

Revolle, Marion 25 October 2018
Data in the form of strings of symbols are very varied (DNA, text, quantized EEG, ...) and cannot always be modelled. A universal description of strings, independent of probabilities, is therefore needed. Kolmogorov complexity was introduced in 1960 to address this issue. The idea is simple: a string is complex if no short description of it exists. Kolmogorov complexity is the algorithmic counterpart of Shannon entropy and underpins algorithmic information theory. However, Kolmogorov complexity is not computable in finite time, which makes it unusable in practice. The first to make it operational were Lempel and Ziv in 1976, who proposed restricting the operations allowed in a description. Another approach uses the size of the string compressed by a lossless compressor. However, these two estimators are ill-defined for the conditional and joint cases, so it is difficult to extend Lempel-Ziv complexity or compressors to algorithmic information theory. Starting from this observation, we introduce a new universal information measure based on Lempel-Ziv complexity, called SALZA. The implementation and sound definition of our measure allow the quantities of algorithmic information theory to be computed efficiently. Standard lossless compressors were used by Cilibrasi and Vitányi to build a very popular universal classifier: the normalized compression distance (NCD). For this application we propose our own estimator, the NSD, and show that it is a universal semi-distance between strings. The NSD outperforms the NCD by adapting naturally to a greater diversity of data and by defining the appropriate conditioning through SALZA. Using the universal prediction qualities of Lempel-Ziv complexity, we then explore questions of causal inference. First, the algorithmic Markov conditions are made computable thanks to SALZA. Then, by defining the algorithmic directed information for the first time, we propose an algorithmic interpretation of Granger causality. We demonstrate the relevance of our approach on synthetic and real data.
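The normalized compression distance mentioned in the abstract has a standard definition, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C is the compressed size. The sketch below computes it with zlib; it is the classical baseline only, not SALZA or the NSD estimator introduced in the thesis.

```python
"""Minimal sketch of the normalized compression distance (NCD) of Cilibrasi
and Vitanyi, the baseline classifier referred to in the abstract, using zlib
as the lossless compressor.
"""
import zlib


def c(data: bytes) -> int:
    """Compressed size of data, used as a rough complexity estimate."""
    return len(zlib.compress(data, level=9))


def ncd(x: bytes, y: bytes) -> float:
    """NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)


if __name__ == "__main__":
    a = b"ACGTACGTACGTACGT" * 20
    b_ = b"ACGTACGAACGTTCGT" * 20
    d = b"the quick brown fox jumps over the lazy dog " * 10
    print(round(ncd(a, b_), 3), round(ncd(a, d), 3))  # the similar pair scores lower
```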
207

Evolvering av Biologiskt Inspirerade Handelsalgoritmer / Evolving Biologically Inspired Trading Algorithms

Gabrielsson, Patrick January 2012
One group of information systems that has attracted a lot of attention during the past decade is financial information systems, especially systems pertaining to financial markets and electronic trading. Delivering accurate and timely information to traders substantially increases their chances of making better trading decisions. Since the dawn of electronic exchanges the trading community has seen a proliferation of computer-based intelligence within the field, enabled by an exponential growth of processing power and storage capacity due to advancements in computer technology. The financial benefits associated with outperforming the market and gaining leverage over the competition have fueled the research of computational intelligence in financial information systems. This has resulted in a plethora of different techniques. The most prevalent techniques used within algorithmic trading today consist of various machine learning technologies, borrowed from the field of data mining. Neural networks have shown exceptional predictive capabilities time and time again. One recent machine learning technology that has shown great potential is Hierarchical Temporal Memory (HTM). It borrows concepts from neural networks and Bayesian networks, and makes use of spatiotemporal clustering techniques to handle noisy inputs and to create invariant representations of patterns discovered in its input stream. In a previous paper [1], an initial study was carried out in which the predictive performance of the HTM technology was investigated within algorithmic trading of financial markets. The study showed promising results, in which the HTM-based algorithm was profitable across bullish, bearish and horizontal market trends, yielding results comparable to its neural network benchmark. However, the previous work made no attempt to produce near-optimal trading models. Evolutionary optimization methods are commonly regarded as superior to alternative methods. The simplest evolutionary optimization technique is the genetic algorithm, which is based on Charles Darwin's evolutionary theory of natural selection and survival of the fittest. The genetic algorithm combines exploration and exploitation in the search for optimal models in the solution space. This paper extends the HTM-based trading algorithm developed in the previous work by employing the genetic algorithm as an optimization method. Once again, neural networks are used as the benchmark technology, since they are by far the most prevalent modeling technique used for predicting financial markets. Predictive models were trained, validated and tested using feature vectors consisting of technical indicators derived from the E-mini S&P 500 index futures market. The results show that the genetic algorithm succeeded in finding predictive models with good performance and generalization ability. The HTM models outperformed the neural network models, but both technologies yielded profitable results with above-average accuracy. / Program: Magisterutbildning i informatik (Master's programme in Informatics)
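As a toy illustration of the genetic-algorithm component, the sketch below evolves the two window lengths of a moving-average crossover rule on a synthetic price series, using simple selection, crossover and mutation. The fitness function, parameters and data are assumptions made for the example; the HTM models and the actual fitness measure of the thesis are not reproduced.

```python
"""Toy genetic algorithm tuning the two window lengths of a moving-average
crossover rule on a synthetic price series.  Illustration only; not the
HTM-based models or the optimization set-up used in the thesis.
"""
import random
import numpy as np

rng = np.random.default_rng(1)
PRICES = np.cumsum(rng.standard_normal(2000)) + 100.0   # placeholder series


def fitness(genome):
    """Simulated profit of a crossover rule with windows (fast, slow)."""
    fast, slow = sorted(genome)
    if fast < 2 or fast == slow:
        return -1e9
    f = np.convolve(PRICES, np.ones(fast) / fast, mode="valid")
    s = np.convolve(PRICES, np.ones(slow) / slow, mode="valid")
    f = f[-len(s):]
    position = np.where(f > s, 1.0, -1.0)                 # long or short
    returns = np.diff(PRICES[-len(position):])
    return float(np.sum(position[:-1] * returns))


def evolve(pop_size=30, generations=40):
    pop = [[random.randint(2, 50), random.randint(10, 200)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                     # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]   # crossover
            if random.random() < 0.3:                      # mutation
                child[random.randrange(2)] += random.randint(-5, 5)
            children.append([max(2, g) for g in child])
        pop = parents + children
    return max(pop, key=fitness)


if __name__ == "__main__":
    best = evolve()
    print("best (fast, slow) windows:", sorted(best), "fitness:", round(fitness(best), 2))
```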
208

Generating rhyming poetry using LSTM recurrent neural networks

Peterson, Cole 30 April 2019
Current approaches to generating rhyming English poetry with a neural network involve constraining output to enforce the condition of rhyme. We investigate whether this approach is necessary, or if recurrent neural networks can learn rhyme patterns on their own. We compile a new dataset of amateur poetry which allows rhyme to be learned without external constraints because of the dataset’s size and high frequency of rhymes. We then evaluate models trained on the new dataset using a novel framework that automatically measures the system’s knowledge of poetic form and generalizability. We find that our trained model is able to generalize the pattern of rhyme, generate rhymes unseen in the training data, and also that the learned word embeddings for rhyming sets of words are linearly separable. Our model generates a couplet which rhymes 68.15% of the time; this is the first time that a recurrent neural network has been shown to generate rhyming poetry a high percentage of the time. Additionally, we show that crowd-source workers can only distinguish between our generated couplets and couplets from our dataset 63.3% of the time, indicating that our model generates poetry with coherency, semantic meaning, and fluency comparable to couplets written by humans. / Graduate
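One plausible way to measure the couplet rhyme rate reported above is to compare the rhyming parts of the end words via the CMU pronouncing dictionary. The sketch below uses the `pronouncing` package for this; the example couplets are invented, and the thesis's exact evaluation framework may differ.

```python
"""Sketch: estimate how often generated couplets rhyme, using the CMU
pronouncing dictionary via the `pronouncing` package (pip install pronouncing).
The couplets below are invented examples; this is one plausible rhyme check,
not the evaluation framework used in the thesis.
"""
import pronouncing


def end_word(line: str) -> str:
    return line.strip().strip(".,!?;:").split()[-1].lower()


def rhymes(w1: str, w2: str) -> bool:
    """Two words rhyme if any of their pronunciations share a rhyming part."""
    parts1 = {pronouncing.rhyming_part(p) for p in pronouncing.phones_for_word(w1)}
    parts2 = {pronouncing.rhyming_part(p) for p in pronouncing.phones_for_word(w2)}
    return bool(parts1 & parts2)


def rhyme_rate(couplets):
    hits = sum(rhymes(end_word(a), end_word(b)) for a, b in couplets)
    return hits / len(couplets)


if __name__ == "__main__":
    generated = [
        ("the evening light is soft and slow", "it paints the river far below"),
        ("a quiet song drifts through the trees", "and settles on the morning breeze"),
        ("the city hums beneath the rain", "while strangers pass and wave again"),
    ]
    print(f"rhyme rate: {rhyme_rate(generated):.2%}")
```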
209

De l’innovation des instruments de politique publique : développement d'une méthode de conception combinatoire autour d'un langage algorithmique et application au dispositif des certificats d’économie d’énergie / Innovation of public policy instruments : development of a combinatory design method via an algorithmic language and application to the energy saving certificates scheme

Baïz, Adam 06 December 2018
Despite their diverse characterizations, innovative public policy instruments seem to fall into three types of innovation: (a) imitations, up to a shift in modalities, of pre-existing instruments; (b) hybridizations of more or less coercive instruments; or (c) assemblies of instruments of a meta-instrumental nature. Assuming that all instruments derive from one another through these three design paths, we sought to produce a design method that, built around a single object model, simultaneously provides an ex post capacity to identify innovative instruments and an ex ante capacity to innovate. Within an intervention-research framework, we chose to define instruments as prefigured causal chains of collective action, and formulated an algorithmic language built from a set of elements (actors, actions, events, logical operators and impact vectors) in order to characterize them ostensively. Having established a method for evaluating instrumental novelty and effectiveness, we could then state that an instrument is innovative if it constitutes a new causal chain that actually operates in the technical and social reality. As a corollary, innovating an instrument amounts to designing a new causal chain, or to modifying the actors and actions that compose it. Finally, in order to illustrate and test this design method, which we call combinatory, we examined the innovative nature of the Energy Saving Certificates scheme (certificats d'économie d'énergie, CEE), proposing an evaluation protocol and formulating several possible lines of innovation.
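A hypothetical Python encoding of the elements the abstract lists (actors, actions, events, logical operators and impact vectors) is sketched below to make the idea of an instrument as a causal chain concrete. All class names, fields and the example chain are assumptions for illustration, not the notation defined in the thesis.

```python
"""Hypothetical encoding of an instrument as a prefigured causal chain built
from actors, actions, events, logical operators and impact vectors.  Names
and structure are illustrative assumptions only.
"""
from dataclasses import dataclass, field
from typing import List, Literal


@dataclass
class Actor:
    name: str                                   # e.g. "energy supplier", "regulator"


@dataclass
class Action:
    actor: Actor
    verb: str                                   # e.g. "finances retrofit"
    impact: List[float] = field(default_factory=list)   # impact vector


@dataclass
class Event:
    description: str                            # e.g. "savings target not met"


@dataclass
class Step:
    """One link of the chain: actions and/or events joined by an operator."""
    operator: Literal["THEN", "AND", "OR", "IF"]
    items: List[object]


@dataclass
class Instrument:
    name: str
    chain: List[Step]

    def actors(self):
        return {i.actor.name for s in self.chain for i in s.items if isinstance(i, Action)}


if __name__ == "__main__":
    supplier, regulator = Actor("energy supplier"), Actor("regulator")
    cee = Instrument(
        name="Energy Saving Certificates (sketch)",
        chain=[
            Step("THEN", [Action(regulator, "sets savings obligation", [0.0, 1.0])]),
            Step("THEN", [Action(supplier, "finances household retrofits", [1.0, 0.5])]),
            Step("IF",   [Event("obligation not met"),
                          Action(regulator, "applies penalty", [-1.0, 0.0])]),
        ],
    )
    print(cee.actors())
```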
210

Structured arrows : a type-based framework for structured parallelism

Castro, David January 2018
This thesis deals with the important problem of parallelising sequential code. Despite the importance of parallelism in modern computing, writing parallel software still relies on many low-level and often error-prone approaches. These low-level approaches can lead to serious execution problems such as deadlocks and race conditions. Due to the non-deterministic behaviour of most parallel programs, testing parallel software can be both tedious and time-consuming. A way of providing guarantees of correctness for parallel programs would therefore be of significant benefit. Moreover, even if we ignore the problem of correctness, achieving good speedups is not straightforward, since this generally involves rewriting a program to consider a (possibly large) number of alternative parallelisations. This thesis argues that new languages and frameworks are needed. These languages and frameworks must not only support high-level parallel programming constructs, but must also provide predictable cost models for these parallel constructs. Moreover, they need to be built around solid, well-understood theories that ensure that: (a) changes to the source code will not change the functional behaviour of a program, and (b) the speedup obtained by doing the necessary changes is predictable. Algorithmic skeletons are parametric implementations of common patterns of parallelism that provide good abstractions for creating new high-level languages, and also support frameworks for parallel computing that satisfy the correctness and predictability requirements that we require. This thesis presents a new type-based framework, based on the connection between structured parallelism and structured patterns of recursion, that provides parallel structures as type abstractions that can be used to statically parallelise a program. Specifically, this thesis exploits hylomorphisms as a single, unifying construct to represent the functional behaviour of parallel programs, and to perform correct code rewritings between alternative parallel implementations, represented as algorithmic skeletons. This thesis also defines a mechanism for deriving cost models for parallel constructs from a queue-based operational semantics. In this way, we can provide strong static guarantees about the correctness of a parallel program, while simultaneously achieving predictable speedups.
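To make the central construct concrete, the sketch below expresses merge sort as a hylomorphism (an unfold that splits the input, followed by a fold that merges the results), the divide-and-conquer shape that an algorithmic skeleton could evaluate in parallel. It is a plain sequential Python illustration, not the typed framework developed in the thesis.

```python
"""Sketch of a hylomorphism, the recursion scheme the abstract uses as its
unifying construct.  Merge sort is written as an unfold (split) followed by a
fold (merge); a divide-and-conquer skeleton could evaluate the recursive
calls in parallel.  Sequential illustration only.
"""


def hylo(algebra, coalgebra, seed):
    """Generic hylomorphism: unfold the seed one layer, recurse on the
    children, then fold the results back together."""
    label, children = coalgebra(seed)
    return algebra(label, [hylo(algebra, coalgebra, c) for c in children])


def split(xs):
    """Coalgebra: a short list is a leaf, otherwise split it into two halves."""
    if len(xs) <= 1:
        return xs, []                 # leaf label, no children
    mid = len(xs) // 2
    return None, [xs[:mid], xs[mid:]]


def merge(label, sorted_children):
    """Algebra: leaves are already sorted; otherwise merge the two halves."""
    if not sorted_children:
        return label
    left, right = sorted_children
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]


if __name__ == "__main__":
    print(hylo(merge, split, [5, 3, 8, 1, 9, 2, 7]))   # [1, 2, 3, 5, 7, 8, 9]
```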
