91

Design and Implementation of a Low-Power SAR-ADC with Flexible Sample-Rate and Internal Calibration

Lindeberg, Johan January 2014 (has links)
The objective of this Master's thesis was to design and implement a low-power analog-to-digital converter (ADC) for sensor measurements. In the complete measurement unit of which the ADC is a part, several different sensors are measured. One set of these sensors consists of three strain gauges whose weak output signals are pre-amplified before being converted. The application focus for the ADC has been these sensors, as they were considered a limiting factor. The report describes theory for the algorithmic and incremental converters, as well as for a hybrid converter that combines the two structures. All converters are based on a single operational amplifier and operate in a repetitive fashion to obtain power-efficient designs on a small chip area, albeit at low conversion rates. Two converters have been designed and implemented to different degrees of completeness. The first is a 13-bit algorithmic (or cyclic) converter that uses a switching scheme to reduce the problem of capacitor mismatch; it was implemented at transistor level and evaluated separately and, to some extent, together with sub-components. The second is a hybrid converter that combines algorithmic and incremental operation to obtain 16 bits of resolution while still offering a fairly high sample rate.
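To make the cyclic (algorithmic) conversion principle concrete, the following Python sketch is an idealised behavioural model that resolves one bit per cycle by doubling the residue and comparing it against the reference. The 13-bit resolution matches the converter described above, but the reference voltage and input value are illustrative assumptions, and none of the thesis's circuit-level details (capacitor-mismatch switching, amplifier non-idealities) are modelled.

```python
def cyclic_adc(vin, vref=1.0, bits=13):
    """Ideal behavioural model of a cyclic (algorithmic) ADC.

    Each iteration resolves one bit, MSB first: the residue is doubled
    and compared against the reference; if it exceeds the reference,
    the bit is 1 and the reference is subtracted. Circuit non-idealities
    are deliberately not modelled here.
    """
    code = 0
    residue = vin  # assumed to lie in [0, vref)
    for _ in range(bits):
        residue *= 2.0            # multiply-by-two stage (op-amp based)
        bit = 1 if residue >= vref else 0
        residue -= bit * vref     # subtract the reference when the bit is set
        code = (code << 1) | bit
    return code

# Example: convert 0.3 V against a 1 V reference into a 13-bit code.
print(cyclic_adc(0.3))  # 2457 = floor(0.3 / 1.0 * 2**13) for an ideal converter
```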
92

Counter-Surveillance in an Algorithmic World

Dutrisac, James George 26 September 2007 (has links)
Surveillance is the act of collecting, analysing, and acting upon information about specific objects, data, or individuals. Recent advances have allowed for the automation of a large part of this process. Of particular interest is the use of computer algorithms to analyse surveillance data. We refer to surveillance that uses this form of analysis as *algorithmic surveillance*. The rapid growth of algorithmic surveillance has left many important questions unasked. Counter-surveillance is the task of making surveillance difficult; to do this, it subverts various components of the surveillance process. Much like surveillance, counter-surveillance has many applications: it is used to critically assess and validate surveillance practices, and it serves to protect privacy and civil liberties and to guard against abuses of surveillance. Unfortunately, counter-surveillance techniques are often considered to be of little constructive use and are consequently underdeveloped. At present, no counter-surveillance techniques exist that adequately address algorithmic surveillance. In order to develop counter-surveillance methods against algorithmic surveillance, the *process* of surveillance must first be understood; understanding this process ensures that the necessary components of algorithmic surveillance can be identified and subverted. Our research therefore begins by developing a model of the surveillance process consisting of three distinct stages: the collection of information, the analysis of that information, and a response to what has been discovered (the action). From this analysis of the structure of surveillance we show that existing counter-surveillance techniques primarily address the collection and action stages of the surveillance process. We argue that neglecting the analysis stage creates significant problems when attempting to subvert algorithmic surveillance, which relies heavily on complex analysis of data. We therefore go on to demonstrate how algorithmic analysis may be subverted, developing attacks against three common analysis techniques: classification, cluster analysis, and association rules. Each of these attacks works surprisingly well and demonstrates significant flaws in current approaches to algorithmic surveillance. / Thesis (Master, Computing) -- Queen's University, 2007-09-18 10:42:21.025
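As a simplified, hedged illustration of analysis-stage subversion, and not the specific attacks developed in the thesis, the Python sketch below injects a handful of decoy records into a dataset and shows how they pull a k-means centroid away from the genuine structure. The data, cluster count and decoy placement are all synthetic assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Genuine records: two well-separated behavioural clusters.
genuine = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.3, size=(100, 2)),
    rng.normal(loc=[5.0, 5.0], scale=0.3, size=(100, 2)),
])

# Decoy records injected by the surveilled party to distort the analysis.
decoys = rng.normal(loc=[10.0, 0.0], scale=0.3, size=(30, 2))

clean_centres = KMeans(n_clusters=2, n_init=10, random_state=0).fit(genuine).cluster_centers_
poisoned_centres = KMeans(n_clusters=2, n_init=10, random_state=0).fit(
    np.vstack([genuine, decoys])
).cluster_centers_

print("centres on genuine data:\n", np.round(clean_centres, 2))
print("centres after decoy injection:\n", np.round(poisoned_centres, 2))
```

With this setup one centroid remains near the first cluster while the other is dragged off the second cluster toward the decoys, so the analysis no longer reflects the genuine behaviour being monitored.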
93

Risk diversification framework in algorithmic trading

Yuan, Jiangchuan 22 May 2014 (has links)
We propose a systematic framework for designing adaptive trading strategies that minimize both the mean and the variance of the execution costs. This is achieved by diversifying risk over sequential decisions in discrete time. By incorporating previous trading performance as a state variable, the framework can dynamically adjust the risk-aversion level for future trading. This incorporation also allows the framework to solve the mean-variance problems for all risk-aversion factors at once. After developing this framework, we apply it to three algorithmic trading problems. The first two are trade scheduling problems, which address how to split a large order into sequential small orders so as to best approximate a target price – in our case, either the arrival price or the Volume-Weighted Average Price (VWAP). The third problem is the optimal execution of the resulting small orders by submitting market and limit orders. Unlike the tradition in both academia and industry of treating the scheduling and order-placement problems separately, our approach treats them together and solves them simultaneously. In out-of-sample tests, this unified strategy consistently outperforms strategies that treat the two problems separately.
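As a minimal, hedged illustration of the trade-scheduling side of the problem, the Python sketch below builds a static VWAP schedule by splitting a parent order in proportion to an assumed intraday volume profile. It is not the adaptive mean-variance framework of the thesis, which would revise such a schedule as prices, volumes and past trading performance are observed; the order size and volume profile are illustrative assumptions.

```python
import numpy as np

def vwap_schedule(total_shares, expected_volume):
    """Split a parent order across time buckets in proportion to the
    expected market volume in each bucket (a static VWAP schedule)."""
    weights = np.asarray(expected_volume, dtype=float)
    weights /= weights.sum()
    child = np.floor(weights * total_shares).astype(int)
    child[-1] += total_shares - child.sum()  # rounding remainder goes to the last bucket
    return child

# Assumed U-shaped intraday volume profile over 8 half-hour buckets.
profile = [18, 12, 9, 8, 8, 10, 14, 21]
print(vwap_schedule(100_000, profile))
```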
94

Validation of machine-oriented strategies in chess endgames

Niblett, Timothy B. January 1982 (has links)
This thesis is concerned with the validation of chess endgame strategies, and with the synthesis of strategies that can be validated. A strategy for a given player is a specification of the move to be made by that player from any position that may occur; this move may depend on the previous moves of both sides. A strategy is said to be correct if following it always leads to an outcome of at least the same game-theoretic value as the starting position. We are not concerned with proving the correctness of the programs that implement the strategies under consideration: we work with knowledge-based programs that produce playing strategies, and assume that their concrete implementations (in POP2, PROLOG, etc.) are correct. The synthesis approach attempts to use the large body of heuristic knowledge and theory accumulated over the centuries by chessmasters to find playing strategies; our concern here is to produce structures for representing a chessmaster's knowledge which can be analysed within a game-theoretic model. The validation approach is that a theory of the domain, in the form of the game-theoretic model of chess, provides an objective measure of the strategy followed by a program; our concern here is to analyse the structures created in the synthesis phase. This is an instance of a general problem, that of quantifying the performance of computing systems. In general, to quantify the performance of a system we need: a theory of the domain; a specification of the problem to be solved; and algorithms and/or domain-specific knowledge to be applied to solve the problem.
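The definition of strategy correctness above can be checked directly once game-theoretic values are available. The Python sketch below verifies, over a toy game graph with assumed values and hypothetical position labels (not an actual endgame database), the sufficient condition that no move chosen by the strategy leads to a position of lower value.

```python
def strategy_is_correct(values, strategy):
    """Check a sufficient condition for strategy correctness: at every
    position where our side is to move, the move chosen by the strategy
    leads to a position whose game-theoretic value from our point of view
    (+1 win, 0 draw, -1 loss) is at least the value of the current
    position, so the guaranteed outcome never degrades.
    """
    return all(values[strategy[pos]] >= values[pos] for pos in strategy)

# Toy game graph with assumed values and a candidate strategy
# (hypothetical position labels, not real chess positions).
values = {"a": 1, "b": 1, "c": 0, "d": 0}
strategy = {"a": "b", "c": "d"}          # our chosen move from 'a' and from 'c'

print(strategy_is_correct(values, strategy))  # True: no chosen move loses value
```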
95

Optimization importance in high-frequency algorithmic trading

Suvorin, Vadim, Sheludchenko, Dmytro January 2012 (has links)
The thesis offers a framework for trading-algorithm optimization and tests the statistical and economic significance of its performance on American, Swedish and Russian futures markets. The results provide strong support for the proposed method: using the presented ideas, one can build an intraday trading algorithm that outperforms the market in the long term.
96

Rozvoj algoritmického myšlení žáků ZŠ ve výuce informaticky zaměřených předmětů s využitím SCRATCH / The development of algorithmic thinking in elementary school pupils using the SCRATCH programming tool

Svoboda, Milan January 2018 (has links)
The diploma thesis deals with algorithmics and programming at elementary schools. It examines the influence of teaching the basics of programming in Scratch on the development of pupils' algorithmic thinking and on their ability to think logically and solve problems. The theoretical part defines the related concepts and relates programming to the key competencies defined by the RVP ZV. The practical part evaluates the experience of teaching 5th- and 6th-grade elementary school pupils within a pedagogical experiment whose aim was to study the influence of teaching with the visual programming language Scratch on the development of pupils' algorithmic thinking.
97

Squelettes algorithmiques méta-programmés : implantations, performances et sémantique / Metaprogrammed algorithmic skeletons : implementations, performances and semantics

Javed, Noman 21 October 2011 (has links)
Structured parallelism approaches are a trade-off between automatic parallelisation and concurrent and distributed programming as offered by MPI or Pthreads. Skeletal parallelism is one of these structured approaches. An algorithmic skeleton can be seen as a higher-order function that captures a classic parallel algorithmic pattern such as a pipeline or a parallel reduction. Often the sequential semantics of a skeleton is quite simple and corresponds to the usual semantics of similar higher-order functions in functional programming languages. The user constructs a parallel application by combining the available skeletons. When designing a parallel program, performance is of course important, so it is very useful for the programmer to rely on a simple yet realistic performance model; Bulk Synchronous Parallelism (BSP) offers such a model. As parallelism can now be found everywhere, from smartphones to supercomputers, it is also important for parallel programming models to rest on formal semantics that allow programs to be verified. The outcome of this work is the Orléans Skeleton Library (OSL), a C++ library providing a set of bulk synchronous data-parallel algorithmic skeletons. OSL uses advanced programming techniques to achieve good efficiency, with communications implemented on top of MPI. Because OSL is based on the BSP model, it is possible not only to predict the performance of OSL programs but also to provide performance portability. The programming model of OSL has been formalized using big-step semantics in the Coq proof assistant, and the use of this semantics for program verification is illustrated on an example.
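To make the skeleton idea concrete, the Python sketch below is a loose analogue of a data-parallel reduction skeleton, not OSL's C++ API: the user supplies only an associative operator, while the distribution of data and the parallel machinery stay hidden inside the higher-order function. The worker count and example data are illustrative assumptions.

```python
from functools import reduce
from multiprocessing import Pool

def _local_reduce(args):
    op, piece = args
    return reduce(op, piece)

def parallel_reduce(op, data, workers=4):
    """Reduction skeleton: the caller supplies an associative binary
    operator; chunking, worker processes and the final combine are
    hidden, in the spirit of an algorithmic skeleton."""
    chunk = (len(data) + workers - 1) // workers
    pieces = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with Pool(workers) as pool:
        partial = pool.map(_local_reduce, [(op, p) for p in pieces])  # local computation phase
    return reduce(op, partial)                                        # combine the partial results

if __name__ == "__main__":
    import operator
    print(parallel_reduce(operator.add, list(range(1_000_000))))  # 499999500000
```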
98

Environnement pour le développement et la preuve de correction systèmatiques de programmes parallèles fonctionnels / Environment for the systematic development and proof of correctness of functional parallel programs

Tesson, Julien 08 November 2011 (has links)
Designing and implementing parallel programs is a complex, error-prone task, and verifying parallel programs is harder than verifying sequential ones. To ease the development and the proof of correctness of parallel programs, we propose to combine the functional bulk synchronous parallel language BSML; algorithmic skeletons, which are higher-order functions on distributed data structures offering an abstraction of parallelism; and the Coq proof assistant, whose specification language is rich enough to write pure functional programs together with their properties. We propose an embedding of the BSML primitives in the Coq logic in a modular form suited to program extraction. We can thus write BSML programs in Coq, reason about them, extract them and execute them in parallel. To ease reasoning about these programs, we formalise the relation between parallel programs, which manipulate distributed data structures, and their specifications, which manipulate sequential structures. We then prove the correctness of an implementation of the algorithmic skeleton BH, a skeleton suited to the processing of distributed lists in the bulk synchronous parallel model. For a set of applications, starting from a specification of a problem in the form of a simple sequential program, we derive an instance of our skeletons and extract a BSML program, which is then executed on parallel machines.
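The correspondence between distributed and sequential structures that the thesis formalises can be sketched informally as follows; this Python fragment is an analogue for illustration only, not BSML or the Coq development, and the `scatter`, `join` and `par_map` names are hypothetical.

```python
def scatter(xs, p):
    """Distribute a sequential list over p processors (block distribution)."""
    chunk = (len(xs) + p - 1) // p
    return [xs[i:i + chunk] for i in range(0, len(xs), chunk)]

def join(pieces):
    """Recover the sequential list from its distributed representation."""
    return [x for piece in pieces for x in piece]

def par_map(f, pieces):
    """Map on the distributed representation (simulated sequentially here)."""
    return [[f(x) for x in piece] for piece in pieces]

# The correspondence a proof would establish for all f, xs and p:
# join(par_map(f, scatter(xs, p))) == [f(x) for x in xs]
xs = list(range(10))
assert join(par_map(lambda x: x * x, scatter(xs, 3))) == [x * x for x in xs]
```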
99

Desenvolvimento algorítmico e arquitetural para a estimação de movimento na compressão de vídeo de alta definição / Algorithmic and architectural development for motion estimation on high definition video compression

Porto, Marcelo Schiavon January 2012 (has links)
Video compression is an extremely relevant topic today, mainly due to the significant growth of digital video applications. Without compression it is practically impossible to transmit or store digital video, owing to the large amount of data involved, which would make applications such as high definition digital television, video conferencing and mobile video calls unviable. The problem grows with the spread of high definition video applications, where the amount of information is considerably larger. Many video coding standards have been developed in recent years, all of them capable of high compression rates. Current video coding standards obtain most of their compression gains by exploiting temporal redundancy through motion estimation. However, the motion estimation algorithms in use today do not consider the characteristic variations of high definition video. This work presents an evaluation of motion estimation in high definition video, showing that well-known fast algorithms, widely used by the scientific community, do not maintain their quality results as video resolution increases. This demonstrates the importance of developing new algorithms focused on very high definition video, beyond HD 1080p. This thesis presents the development of new fast motion estimation algorithms focused on high definition video coding. The algorithms developed in this thesis are less susceptible to local minima, resulting in significant quality gains over conventional fast algorithms when applied to high definition video. In addition, this work also targets the development of dedicated hardware architectures for these new algorithms, likewise aimed at high definition video. The architectural development is extremely relevant, especially for real-time applications at 30 frames per second and for mobile devices, where performance and power requirements are critical. All developed algorithms were evaluated on a set of ten HD 1080p test sequences, and their quality and computational cost were compared with known algorithms from the literature. The dedicated hardware architectures developed for the new algorithms were described in VHDL and synthesized for FPGAs and for ASIC standard cells in the 0.18 µm and 90 nm technologies. The developed algorithms show quality gains for high definition video over conventional fast algorithms, and the developed architectures achieve high processing rates with low hardware resource usage and power consumption.
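As a reference point for the kind of computation involved, the Python sketch below performs exhaustive block-matching motion estimation with a sum-of-absolute-differences (SAD) criterion for a single block. This is the baseline that fast algorithms approximate with far fewer candidates, which is where local minima become a risk; the block size, search range and synthetic frames are illustrative assumptions, not the algorithms or test material of the thesis.

```python
import numpy as np

def full_search(ref, cur, top, left, block=16, search=16):
    """Exhaustive block-matching motion estimation for one block.

    Compares the current block against every candidate block of the
    reference frame inside a +/-search window and returns the motion
    vector (dy, dx) with the smallest sum of absolute differences.
    """
    cur_block = cur[top:top + block, left:left + block].astype(np.int32)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue  # candidate falls outside the reference frame
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(cur_block - cand).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

# Synthetic luminance frames: the current frame is the reference shifted by (2, 3).
ref = np.random.default_rng(1).integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))
print(full_search(ref, cur, top=24, left=24))  # ((-2, -3), 0): the block is found 2 rows up, 3 columns left
```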
100

Towards hypertextual music : digital audio, deconstruction and computer music creation

Britton, Sam January 2017 (has links)
This is a study of the way in which digital audio, and a number of key associated technologies that rely on it as a framework, have changed the creation, production and dissemination of music, as witnessed by my own creative practice. The study is built on my own work as an electronic musician and composer and draws on numerous collaborations with other musicians as well as researchers and artists, as documented through commissions, performances, academic papers and commercial releases over a nine-year period from 2007 to 2016. I begin by contextualising my own musical practice and outlining some prominent themes associated with the democratisation of computing, which the work of this thesis interrogates as a critical framework for the production of musical works. I go on to assess how works using various techniques afforded by digital audio may be interpreted as progressively instantiating a digital ontology of music. In this context I propose a method of analysis and criticism of works explicitly concerned with audio analysis and algorithmic processes, based on my interpretation of the concept of 'hypertext', wherein the ability of computers to analyse, index and create multi-dimensional, non-linear links between segments of digital audio is best described as hypertextual. In light of this, I contextualise the merits of this reading of music created using these affordances of digital audio through a reading of several key works of twentieth-century music from a hypertextual perspective, emphasising the role that information theory and semiotics have to play in analyses of these works. I proffer this as the beginnings of a useful model for musical composition in the domain of digital audio, which I seek to explore through my own practice. I then describe and analyse, both individually and in parallel, numerous works I have undertaken that interrogate the intricacies of what it means to work in the domain of digital audio with audio analysis, machine listening, and algorithmic and generative computational processes, and I consider the ways in which aspects of this work might be seen as contributing useful and novel insights into music creation by harnessing properties intrinsic to digital audio as a medium. Finally, I emphasise, on the basis of the music and research presented in the thesis, the extent to which digital audio and the harnessing of increasingly complex computational systems for the production and dissemination of music have changed the ontology of music production, a situation which I interpret as creating both substantial challenges and great possibilities for the future of music.
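As a loose, hedged sketch of the hypertextual linking described above, and not the analysis or machine-listening tools actually used in the works, the Python fragment below segments a synthetic signal, describes each segment by its magnitude spectrum, and links every segment to its most similar neighbours, yielding a small non-linear graph over audio material. All parameters and the test signal are illustrative assumptions.

```python
import numpy as np

def segment_links(signal, sr=44_100, seg_seconds=0.5, k=2):
    """Cut a signal into fixed-length segments, describe each one by its
    magnitude spectrum and link each segment to its k most similar
    neighbours (cosine similarity): a toy 'hypertextual' index."""
    seg_len = int(sr * seg_seconds)
    n_segs = len(signal) // seg_len
    feats = np.array([
        np.abs(np.fft.rfft(signal[i * seg_len:(i + 1) * seg_len]))
        for i in range(n_segs)
    ])
    feats /= np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12
    sim = feats @ feats.T                  # pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)         # a segment never links to itself
    return {i: [int(j) for j in np.argsort(sim[i])[::-1][:k]] for i in range(n_segs)}

# Synthetic material: alternating 220 Hz and 330 Hz half-second tones.
sr = 44_100
t = np.arange(sr // 2) / sr
tones = [np.sin(2 * np.pi * f * t) for f in (220, 330, 220, 330, 220, 330)]
print(segment_links(np.concatenate(tones), sr=sr))  # each tone links to its spectral twins
```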
