111

Parallel Evaluation of Numerical Models for Algorithmic Trading

Ligr, David January 2016 (has links)
This thesis will address the problem of the parallel evaluation of algorithmic trading models based on multiple-kernel support vector regression. Various approaches to parallelizing the evaluation of these models will be proposed, and their suitability for highly parallel architectures, namely the Intel Xeon Phi coprocessor, will be analysed, taking into account the specifics of this coprocessor and of its programming. Based on this analysis a prototype will be implemented, and its performance will be compared against serial and multi-core baselines in a series of experiments.
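The core computation described here — evaluating a multiple-kernel SVR model over many query points — parallelizes naturally over the queries. The sketch below is a minimal illustration of that idea in Python; the kernel choices, data shapes and the process-pool backend are assumptions made for the sketch, not the thesis's actual Xeon Phi implementation.

```python
"""Minimal sketch: parallel evaluation of a multiple-kernel SVR model.

Illustration only -- the kernels, weights and parallel backend are
hypothetical; the thesis itself targets the Intel Xeon Phi coprocessor.
"""
from concurrent.futures import ProcessPoolExecutor

import numpy as np


def rbf(X, Z, gamma=0.5):
    # Gaussian (RBF) kernel matrix between query rows X and support rows Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)


def poly(X, Z, degree=2):
    # Polynomial kernel matrix.
    return (X @ Z.T + 1.0) ** degree


def predict_chunk(args):
    # Evaluate f(x) = sum_m mu_m * K_m(x, SV) @ alpha + b on one chunk of queries.
    X_chunk, SV, alpha, b, mu = args
    K = mu[0] * rbf(X_chunk, SV) + mu[1] * poly(X_chunk, SV)
    return K @ alpha + b


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    SV = rng.normal(size=(200, 8))        # support vectors (hypothetical)
    alpha = rng.normal(size=200)          # dual coefficients (hypothetical)
    b, mu = 0.1, np.array([0.7, 0.3])     # bias and kernel weights
    X = rng.normal(size=(10_000, 8))      # query points to evaluate

    chunks = np.array_split(X, 8)         # data-parallel split over the queries
    with ProcessPoolExecutor(max_workers=8) as pool:
        parts = list(pool.map(predict_chunk, [(c, SV, alpha, b, mu) for c in chunks]))
    y = np.concatenate(parts)
    print(y.shape)
```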
112

Précision p-adique / p-adic precision

Vaccon, Tristan 03 July 2015 (has links)
P-adic numbers form a field in arithmetic analogous to the real numbers. The advent of arithmetic geometry during the last few decades has yielded many algorithms using these numbers, which can in general only be handled with finite precision. We design a method, which we call differential precision, to study the behaviour of precision in a p-adic context; it reduces the study to a first-order problem. We also study the question of which Gröbner bases can be computed over a p-adic number field.
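To make the "first-order" idea concrete, the differential approach tracks precision through a function via its derivative. The schematic statement and worked example below are a hedged paraphrase of this kind of precision lemma (the exact hypotheses on the lattice H are in the thesis and related work), not a quotation from it.

```latex
% First-order (differential) model of p-adic precision propagation:
% for a sufficiently small lattice (precision ball) H around x,
\[
  f(x + H) \;=\; f(x) + f'(x)\cdot H .
\]
% Worked example for the product f(x,y) = xy, with inputs known at
% precision O(p^N) and O(p^M) respectively:
\[
  \bigl(x + O(p^{N})\bigr)\bigl(y + O(p^{M})\bigr)
  \;=\; xy + O\!\bigl(p^{\,\min(N + v_p(y),\; M + v_p(x))}\bigr),
\]
% where v_p denotes the p-adic valuation: the first-order terms
% x O(p^M) + y O(p^N) give the precision, while the cross term O(p^{N+M})
% is neglected at first order (it is dominated in the typical case where
% N > v_p(x) and M > v_p(y)).
```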
113

Towards algorithmic Experience: Redesigning Facebook’s News Feed

Alvarado, Oscar January 2017 (has links)
Algorithms currently have direct implications for our democracies and societies, and they also shape most of our daily activities as users, influencing our decisions and promoting certain behaviours. In this context, it is necessary to think about how to design for the implications these algorithms have from a user-centred perspective, particularly on social media platforms that are so prominent in our information sources and flows. This thesis therefore provides an introduction to the concept of algorithmic experience and studies how to implement it for social media services on mobile devices. Using a Research through Design methodology supported by interface analysis, document analysis and user design workshops, it presents results grouped into five areas: algorithmic profiling transparency, algorithmic profiling management, algorithmic awareness, algorithmic user control and selective algorithmic remembering. These five areas provide a framework for deriving requirements and guiding the evaluation of algorithmic experience in social media contexts.
114

Statistical Arbitrage in Algorithmic Trading of US Bonds / Statistická arbitráž při algoritmickém obchodování amerických dluhopisů

Juhászová, Jana January 2017 (has links)
This thesis deals with statistical arbitrage as a strategy applied in the algorithmic trading of US Treasury bonds in the selected timeframe from 1980 until 2017. Our aim is to show that a specific event on the Treasury market, namely the reopening of bonds, constitutes an arbitrage opportunity that enables an investor to systematically earn extraordinary profits. The thesis includes a theoretical introduction to algorithmic trading and statistical arbitrage. Based on this introduction we formulate hypotheses, which are then tested in the application part by constructing an algorithm that simulates the trading strategy on historical data. Comparing three strategies, we find that the proposed strategy is meaningful: it outperforms a random walk and is profitable.
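As an illustration of the kind of event-driven backtest described here, the sketch below simulates a rule that enters a position around a bond-reopening date and compares it with random entry dates. Everything in it — the return series, event dates, holding window and cost level — is a hypothetical placeholder, not the thesis's actual algorithm or data.

```python
"""Minimal event-study backtest sketch: reopening dates vs. random entries.

All inputs (return series, event dates, holding window, costs) are
hypothetical placeholders, not the data or rules used in the thesis.
"""
import numpy as np


def event_pnl(returns, event_idx, hold=5, cost=0.0002):
    # Sum the next `hold` daily returns after each event, minus a round-trip cost.
    pnl = []
    for t in event_idx:
        if t + hold < len(returns):
            pnl.append(returns[t + 1:t + 1 + hold].sum() - 2 * cost)
    return np.array(pnl)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    daily_ret = rng.normal(0.0001, 0.003, size=2500)                  # placeholder bond returns
    reopenings = rng.choice(len(daily_ret) - 10, 60, replace=False)   # placeholder event dates

    strat = event_pnl(daily_ret, reopenings)
    # Random-entry baseline: the same number of trades at random dates, many draws.
    baseline = np.array([
        event_pnl(daily_ret, rng.choice(len(daily_ret) - 10, 60, replace=False)).mean()
        for _ in range(1000)
    ])
    # Fraction of random baselines that match or beat the strategy's mean trade PnL.
    p_like = (baseline >= strat.mean()).mean()
    print(f"strategy mean trade PnL: {strat.mean():.5f}, baseline percentile p: {p_like:.3f}")
```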
115

On the Effect of Replication of Input Files on the Efficiency and the Robustness of a Set of Computations / Étude de l’effet de la réplication de fichiers d’entrée sur l’efficacité et la robustesse d’un ensemble de calculs

Lambert, Thomas 08 September 2017 (has links)
The increasing importance of High Performance Computing (HPC) and Big Data applications creates new issues in parallel computing. One of them is communication, that is, the data transferred from one processor to another. Such data movements have an impact on computation time, inducing delays and increasing energy consumption. Although replication, of either tasks or files, generates communication, it is also an important tool to improve resilience and parallelism. In this thesis, we focus on the impact of the replication of input files on the overall amount of communication. For this purpose, we concentrate on two practical problems. The first one is parallel matrix multiplication, where the goal is to induce as few replications as possible in order to decrease the amount of communication. The second is the scheduling of the “Map” phase in the MapReduce framework, where replication is an input of the problem and the goal is to use it in the best possible way. In addition to the replication issue, this thesis also compares static and dynamic approaches to scheduling: static approaches compute schedules before the computation starts, while dynamic approaches compute them during the computation itself. We design hybrid strategies in order to combine the advantages of both.
First, we relate communication-avoiding matrix multiplication to a square partitioning problem in which load balancing is given as an input. The goal is to split a square into zones (whose areas depend on the relative speeds of the resources) while minimizing the sum of their half-perimeters. We improve the existing results in the literature for this problem with two additional approximation algorithms, one based on a recursive strategy and the other on a space-filling fractal curve. We also propose an alternative model based on a cube partitioning problem; we prove the NP-completeness of the associated decision problem and design two approximation algorithms. Finally, we implement the allocation strategies for matrix multiplication on top of the StarPU library. The experimental results show an improvement in computation time and a significant reduction in data transfers when a static allocation strategy is coupled with a work-stealing technique.
Second, for the scheduling of the “Map” phase of MapReduce, several copies of the input files are distributed among the available processors. For this problem we consider two metrics. In the first case, we forbid non-local tasks (tasks processed on a processor that does not hold their input files) and, under this constraint, aim at minimizing the makespan. In the second case, we allow non-local tasks and aim at minimizing their number while also minimizing the makespan. For the theoretical study, we focus on tasks with homogeneous computation times. We first relate a greedy algorithm for the makespan metric to a “balls-into-bins” process with choice, proving that this algorithm produces solutions with an expected overhead (the difference between the number of tasks on the most loaded processor and the number of tasks in a perfect distribution) in O(m log m), where m denotes the number of processors. We then relate the scheduling problem with non-local tasks forbidden to a graph orientation problem, and thereby show, using results from the literature, that with high probability there exists a near-perfect assignment (whose overhead is at most 1) and that optimal polynomial-time algorithms exist. For the communication metric, we provide new algorithms based on a graph model close to matching problems in bipartite graphs, and we prove that these algorithms are optimal for both the communication and makespan metrics. Finally, we provide simulations based on traces from a MapReduce cluster to test our strategies in realistic settings; they show that the proposed algorithms perform very well when the variance of the computation times of the tasks of a job is low or medium.
116

Data-Snooping Biases in Backtesting

Krpálek, Jan January 2016 (has links)
In this paper, we utilize White's Reality Check (White, 2000) and Hansen's SPA test (Hansen, 2004) to evaluate technical trading rules while quantifying the data-snooping bias. We then discuss the results within the Probability of Backtest Overfitting framework introduced by Bailey et al. (2015). The study thus presents a comprehensive test of momentum trading across US futures markets from 2004 to 2016. The evidence indicates that technical trading rules have not been profitable in US futures markets after correcting for the data-snooping bias.
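A simplified version of the data-snooping correction used here — bootstrapping the performance of the best rule and comparing it with the observed best — can be sketched as follows. This is an i.i.d.-resampling simplification on synthetic data; the actual Reality Check and SPA test use a stationary (block) bootstrap and further refinements, so treat the details below as assumptions.

```python
"""Simplified Reality-Check-style bootstrap for the best of K trading rules.

Illustration only: uses i.i.d. resampling of days instead of the stationary
bootstrap of White (2000) / Hansen (2004), and synthetic data.
"""
import numpy as np


def reality_check_pvalue(excess, n_boot=2000, seed=0):
    """excess: (T, K) daily returns of K rules in excess of the benchmark."""
    rng = np.random.default_rng(seed)
    T, K = excess.shape
    means = excess.mean(axis=0)
    v_obs = np.sqrt(T) * means.max()             # observed performance of the best rule
    v_boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, T, size=T)          # i.i.d. resample of days (simplification)
        # Re-centre so the bootstrap imposes the null "no rule beats the benchmark".
        v_boot[b] = np.sqrt(T) * (excess[idx].mean(axis=0) - means).max()
    return (v_boot >= v_obs).mean()


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    # Synthetic example: 50 rules with no true edge over 1500 days.
    excess = rng.normal(0.0, 0.01, size=(1500, 50))
    print(f"data-snooping-adjusted p-value: {reality_check_pvalue(excess):.3f}")
```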
117

Identification de motifs au sein des structures biologiques arborescentes / Pattern identification in biological tree structure

Gaillard, Anne-Laure 30 November 2011 (has links)
The explosion in the amount of available biological data creates a pressing need for efficient bioinformatics methods. Many biological structures are modelled by tree structures, such as RNA secondary structures and plant architectures. These structures contain repeated units both within a single structure and across different structures. We propose to exploit this fundamental property to improve the storage and processing of such objects. Following the principle of sequence filtering, we define a filtering method on ordered trees to efficiently retrieve from a database the set of ordered trees close to a query. The method is based on a decomposition of the tree into seeds and on the detection of seeds shared between structures. We define and solve the maximum chaining problem on trees, and we propose, for RNA secondary structure applications, a definition of (l−d) centered seeds. Secondly, based on instantiation techniques used for instance in computer graphics and on the known redundancy of biological structures, we present a compression method that reduces the memory space required to store plant architectures. After the redundancies have been identified, a more compact data structure is used to represent the plant architecture, which may carry both topological and geometrical information.
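To make the filtering idea concrete — cut each tree into small "seeds", index the database by seeds, and keep only candidates that share enough seeds with the query — a simplified sketch follows. The seed definition used here (downward label paths of a fixed length) is a deliberate simplification for illustration; it is not the (l−d) centered seeds or the maximum-chaining step of the thesis.

```python
"""Simplified seed-based filtering over ordered trees.

Seeds here are downward label paths of fixed length l -- a simplification
chosen for the sketch, not the thesis's (l-d) centered seeds.
"""
from collections import defaultdict

# An ordered tree is represented as (label, [children]).


def seeds(tree, l=3):
    """Return the set of downward label paths of length l found in the tree."""
    out = set()

    def walk(node, path):
        label, children = node
        path = path + (label,)
        if len(path) >= l:
            out.add(path[-l:])
        for child in children:
            walk(child, path)

    walk(tree, ())
    return out


def filter_candidates(query, database, l=3, min_shared=2):
    """Keep database trees sharing at least `min_shared` seeds with the query."""
    q = seeds(query, l)
    index = defaultdict(set)                 # seed -> names of trees containing it
    for name, tree in database.items():
        for s in seeds(tree, l):
            index[s].add(name)
    counts = defaultdict(int)
    for s in q:
        for name in index[s]:
            counts[name] += 1
    return {name for name, c in counts.items() if c >= min_shared}


if __name__ == "__main__":
    t1 = ("A", [("B", [("C", []), ("D", [])]), ("C", [("D", [])])])
    t2 = ("A", [("B", [("C", [])]), ("E", [])])
    t3 = ("X", [("Y", []), ("Z", [])])
    print(filter_candidates(t1, {"t2": t2, "t3": t3}, l=2, min_shared=1))
```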
118

Algoritmické obchodování / Algorithmic trading

Uherek, Jiří January 2014 (has links)
The diploma thesis is focused on algorithmic trading. The first part summarizes the theoretical background: the definition of algorithmic trading, execution mechanisms, quantitative strategies (including the pitfalls of backtesting), and the benefits and threats of algorithmic trading from the market's point of view. The thesis also offers an introduction to genetic algorithms. In the practical part, a strategy is developed that uses a genetic algorithm to find an optimal combination of particular strategies. The results show that using genetic algorithms was beneficial for the given data series. They also show that the size of transaction costs is crucial for strategy performance, as is the division of the data series into a testing sample and a validation sample.
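A minimal sketch of the approach described — a genetic algorithm searching for which component strategies to combine, with fitness measured net of transaction costs on an in-sample window — is given below. The signal definitions, cost level and GA parameters are illustrative assumptions, not those of the thesis.

```python
"""Sketch: GA selecting a combination of trading signals, net of costs.

Signals, cost level and GA parameters are illustrative assumptions.
"""
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical price series and a small pool of candidate signals (+1/-1 positions).
prices = np.cumsum(rng.normal(0, 1, 2000)) + 500
rets = np.diff(prices) / prices[:-1]


def sma_signal(window):
    # +1 when the price is above its trailing moving average, -1 otherwise.
    sma = np.array([prices[max(0, t - window):t + 1].mean() for t in range(len(prices))])
    return np.where(prices > sma, 1.0, -1.0)[:-1]


signals = np.stack([sma_signal(w) for w in (5, 10, 20, 50, 100, 200)])  # shape (K, T-1)


def fitness(mask, cost=0.0005):
    # Net in-sample PnL of the majority vote of the selected signals.
    if mask.sum() == 0:
        return -np.inf
    pos = np.sign(signals[mask.astype(bool)].mean(axis=0))
    trades = np.abs(np.diff(pos, prepend=pos[0]))        # position changes incur costs
    return float((pos * rets - cost * trades).sum())


def ga(pop_size=30, gens=40, k=signals.shape[0]):
    pop = rng.integers(0, 2, size=(pop_size, k))         # binary inclusion vectors
    for _ in range(gens):
        fit = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(fit)[::-1][:pop_size // 2]]          # truncation selection
        cut = rng.integers(1, k, size=pop_size // 2)
        children = np.array([np.r_[parents[i % len(parents)][:c],
                                   parents[(i + 1) % len(parents)][c:]]
                             for i, c in enumerate(cut)])             # one-point crossover
        children ^= (rng.random(children.shape) < 0.1).astype(children.dtype)  # mutation
        pop = np.vstack([parents, children])
    best = max(pop, key=fitness)
    return best, fitness(best)


if __name__ == "__main__":
    best, f = ga()
    print("selected strategies:", best, "in-sample net PnL:", round(f, 4))
```

In the spirit of the thesis's conclusion, the same fitness would then be re-evaluated on a held-out validation sample and under different cost levels before drawing any conclusion.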
119

Raciocínio de agentes musicais composição algorítmica, vida artificial e interatividade em sistemas multiagentes musicais / Musical agents reasoning, algorithmic composition, artificial life and interactivity in multiagent musical systems

Santiago David Davila Benavides 03 September 2012 (has links)
The many musical multiagent systems developed in recent years demonstrate a growing interest in composition and musical-performance systems that use intelligent-agent technology, with a special focus on systems that integrate algorithmic composition techniques, artificial-life components and interactivity. Most existing projects, however, show significant limitations in scope and flexibility: they normally use symbolic musical notation, solve a single problem or scenario, and are driven by a technical rather than a musical motivation. In this context, frameworks for building musical multiagent systems have emerged, such as Ensemble and the Interactive Swarm Orchestra, offering flexibility for modelling and implementing such systems and diversifying both the types of application (compositional or performative) and the kinds of musical encoding that can be used. Starting from these tools, we study the musical agent from an internal perspective, focusing on its reasoning components: the processes that define the agent's behaviour in the system's virtual environment and that are essential for determining and improving its compositional value. The frameworks studied stand out for supporting audio as a possible musical-encoding format, for exploiting sound spatialization, and for exploring interactivity in applications, whether between computational agents only or between agents and human users. This research addresses systems with these characteristics. Through framework extensions and aesthetically motivated case studies, we intend to continue these projects and at the same time validate and promote their use among potential users of the tools, such as composers, musicians interested in performance, and other enthusiasts of interactive musical systems.
120

Designing a Modern Skeleton Programming Framework for Parallel and Heterogeneous Systems

Ernstsson, August January 2020 (has links)
Today's society is increasingly software-driven and dependent on powerful computer technology. It is therefore important that advancements in low-level processor hardware are made available for exploitation by a growing number of programmers of differing skill levels. However, as we approach the end of Moore's law, hardware designers are finding new and increasingly complex ways to increase the accessible processor performance. It is getting more and more difficult to target these processing resources effectively without expert knowledge of parallelization, heterogeneous computation, communication, synchronization, and so on. To ensure that the software side can keep up, advanced programming environments and frameworks are needed to bridge the widening gap between hardware and software. One such example is the pattern-centric skeleton programming model, and in particular the SkePU project. The work presented in this thesis first redesigns the SkePU framework based on modern C++ variadic template metaprogramming and state-of-the-art compiler technology. It then explores new ways to improve performance: by providing new patterns, improving the data-access locality of existing ones, and using both static and dynamic knowledge about program flow. The work combines novel ideas with practical evaluation of the approach on several applications. The advancements also include the first skeleton API that allows variadic skeletons, new data containers, and an approach that makes skeleton programming more customizable without compromising universal portability. / Additional research funders: EU H2020 project EXA2PRO (801015); SeRC.
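As a language-neutral illustration of what a "skeleton" is — a pre-packaged parallel pattern that the programmer parameterizes with ordinary user functions while the framework decides how to execute it — the sketch below implements a tiny map skeleton with selectable serial and multi-process backends. It is a conceptual toy in Python, not SkePU's C++ API (SkePU uses variadic templates and backends such as OpenMP, CUDA and OpenCL); all names in the sketch are invented.

```python
"""Conceptual sketch of a map skeleton with pluggable backends.

Not SkePU's API: this Python toy only illustrates the pattern-centric idea
of parameterizing a prebuilt parallel pattern with user functions.
"""
from concurrent.futures import ProcessPoolExecutor


def _apply(args):
    # Worker helper: apply the user function to every row of one chunk.
    func, chunk = args
    return [func(*row) for row in chunk]


class MapSkeleton:
    """Element-wise map over any number of equally sized input sequences."""

    def __init__(self, func, backend="serial", workers=4):
        self.func, self.backend, self.workers = func, backend, workers

    def __call__(self, *inputs):
        rows = list(zip(*inputs))                         # variadic inputs, one row per element
        if self.backend == "serial":
            return [self.func(*row) for row in rows]
        # Parallel backend: split rows into strided chunks, one per worker process.
        chunks = [rows[i::self.workers] for i in range(self.workers)]
        with ProcessPoolExecutor(max_workers=self.workers) as pool:
            parts = pool.map(_apply, [(self.func, c) for c in chunks])
        # Re-interleave results back into the original element order.
        out = [None] * len(rows)
        for i, part in enumerate(parts):
            out[i::self.workers] = part
        return out


def axpy(a, x, y):
    return a * x + y


if __name__ == "__main__":
    vadd = MapSkeleton(axpy, backend="process", workers=4)
    print(vadd([2.0] * 8, range(8), range(8)))            # same user code, parallel execution
```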
