161
A Computational Architecture Methodology For Design In Traditional Tissue: The Case Of Kalkan. Kutay, Karabag (1 September 2010)
This study addresses the problem of 'new building in a traditional setting' using computational design tools. The intention is to provide a methodology for analysing the architectural features of a traditional tissue and, moreover, to propose computational design strategies that use algorithms to process the analytical data serving new building design. The introduction presents this goal, together with a critical discussion, from a conservationist perspective, of contemporary examples of computational design.
Contemporary digital tools and methods employed in the field of architecture are discussed with a focus on algorithmic approaches, followed in the next chapter by a brief history of the utilization of computational tools and of digital design philosophy. Moreover, organic architecture is discussed as a complex entity composed of integral elements and their relations, as well as the designer
162
Algorithmic Trading: Hidden Markov Models on Foreign Exchange Data. Idvall, Patrik; Jonsson, Conny (January 2008)
In this master's thesis, hidden Markov models (HMMs) are evaluated as a tool for forecasting movements in a currency cross. With an ever-increasing electronic market making way for more automated trading, so-called algorithmic trading, there is a constant need for new trading strategies that try to find alpha, the excess return, in the market.

HMMs are based on the well-known theory of Markov chains, but the states are assumed hidden, governing some observable output. HMMs have mainly been used for speech recognition and communication systems, but have lately also been applied to financial time series with encouraging results. Both discrete and continuous versions of the model are tested, as well as single- and multivariate input data.

In addition to the basic framework, two extensions are implemented in the belief that they will further improve the prediction capabilities of the HMM. The first is a Gaussian mixture model (GMM), in which each state is assigned a set of single Gaussians that are weighted together to replicate the density function of the stochastic process. This allows non-normal distributions to be modelled, as is often assumed necessary for foreign exchange data. The second is an exponentially weighted expectation maximization (EWEM) algorithm, which takes time attenuation into consideration when re-estimating the parameters of the model. This keeps old trends in mind while giving more attention to recent patterns.

Empirical results show that the HMM using continuous emission probabilities can, for some model settings, generate acceptable returns with Sharpe ratios well over one, whilst the discrete version in general performs poorly. The GMM therefore seems to be a much-needed complement to the HMM. The EWEM, however, does not improve results as one might have expected. Our general impression is that the predictor using HMMs that we have developed and tested is too unstable to be adopted as a trading tool on foreign exchange data, with too many factors influencing the results. More research and development is called for.
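As a minimal illustration of the machinery involved, the following sketch runs the forward algorithm of a two-state Gaussian HMM on synthetic data and produces a one-step-ahead expected return; every parameter value is invented for illustration, and the thesis's GMM emissions and EWEM re-estimation are not modelled.

```python
import numpy as np

# Hedged sketch: forward filtering in a 2-state Gaussian HMM and a
# one-step-ahead forecast of a currency-cross return. All parameters
# below are illustrative, not estimates from the thesis.

A = np.array([[0.95, 0.05],        # state transition probabilities
              [0.10, 0.90]])
mu = np.array([0.0004, -0.0006])   # per-state mean daily return
sigma = np.array([0.004, 0.009])   # per-state return volatility
pi = np.array([0.5, 0.5])          # initial state distribution

def gauss_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def filter_and_forecast(returns):
    """Run the forward algorithm; return the expected next return."""
    alpha = pi * gauss_pdf(returns[0], mu, sigma)
    alpha /= alpha.sum()                    # normalise to avoid underflow
    for r in returns[1:]:
        alpha = (alpha @ A) * gauss_pdf(r, mu, sigma)
        alpha /= alpha.sum()
    state_pred = alpha @ A                  # P(next state | observations)
    return state_pred @ mu                  # E[next return]

rng = np.random.default_rng(0)
fake_returns = rng.normal(0.0, 0.005, size=250)   # stand-in for FX data
print("expected next-day return: %+.6f" % filter_and_forecast(fake_returns))
```

A trading rule would then, for instance, go long when the forecast is positive and short otherwise, which is the kind of signal such Sharpe ratios are computed from.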
163
A Cyclic Analog-to-Digital Converter for CMOS image sensors. Levski Dimitrov, Deyan (January 2014)
The constant drive to improve digital video capture speeds while increasing power efficiency has led to tremendous research activity in the image sensor readout field during the past decade. Improvements in lithography and solid-state technologies make it possible to manufacture higher-resolution image sensors. Doubling the resolution quadruples the readout speed requirement if the same capture frame rate is to be maintained. The speed requirements of conventional serial readout techniques follow the same curve and are becoming ever more challenging to meet, so employing parallelism in the readout scheme appears inevitable if the analog readout circuits are to be relaxed while the same capture speeds are kept. This transition, however, imposes additional demands on parallel ADC designs, mainly related to achievable accuracy, area and power. In this work a 12-bit Cyclic ADC (CADC) aimed at column-parallel readout in CMOS image sensors is presented. The conducted study covers multiple CADC sub-component architectures and analyses them to an intermediate level of depth. Several Multiplying DAC (MDAC) structures are re-examined, and a preliminary redundant signed-digit CADC design based on a 1.5-bit modified flip-over MDAC is carried out. Three comparator architectures are explored and a dynamic interpolative Sub-ADC is presented. Finally, some weak spots degrading the performance of the resulting design are analyzed, and two MDAC capacitor mismatch error reduction techniques are presented as possible architectural improvements.
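At the behavioural level the conversion loop of such a converter is compact. The sketch below models an ideal 12-bit cyclic ADC built from a 1.5-bit stage with the standard ±Vref/4 sub-ADC thresholds and redundant signed-digit (RSD) combination; it is a generic textbook model under our own assumptions, not the circuit designed in this work, but it shows why the redundancy tolerates comparator offsets smaller than Vref/4.

```python
# Hedged behavioural sketch of a cyclic ADC using an ideal 1.5-bit MDAC
# stage with redundant signed-digit correction. Values are illustrative.

VREF = 1.0
NBITS = 12

def cyclic_adc(vin, comparator_offset=0.0):
    """Convert one sample; RSD redundancy absorbs comparator offset."""
    v = vin
    digits = []
    for _ in range(NBITS):
        # 1.5-bit sub-ADC: two comparators at roughly +/- Vref/4
        if v > VREF / 4 + comparator_offset:
            d = 1
        elif v < -VREF / 4 + comparator_offset:
            d = -1
        else:
            d = 0
        digits.append(d)
        v = 2.0 * v - d * VREF   # MDAC residue: gain of 2 minus DAC level
    # RSD combination: signed digits weighted by descending powers of two
    return sum(d * 2 ** (NBITS - 1 - k) for k, d in enumerate(digits))

# An offset well below Vref/4 keeps the residue in range, so the two
# codes below agree to within one LSB:
print(cyclic_adc(0.3217), cyclic_adc(0.3217, comparator_offset=0.05))
```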
164
Complexity issues in counting, polynomial evaluation and zero finding. Briquel, Irénée (29 November 2011)
In the present thesis, we compare classical Boolean complexity with algebraic complexity by studying problems related to polynomials. We consider the algebraic models of Valiant and of Blum, Shub and Smale (BSS). To study the algebraic complexity classes, one can start from results and open questions in the Boolean case and look at their translation into the algebraic context; comparing the results obtained in the two settings then deepens our understanding of both complexity theories. The first part follows this framework. By considering a polynomial canonically associated to a Boolean formula, we get a link between Boolean complexity issues for the formula and algebraic complexity problems for the polynomial. We study the complexity of computing the polynomial in Valiant's model as a function of the complexity of the Boolean formula, and find algebraic counterparts to several Boolean results. Along the way, we are also able to use algebraic methods to improve Boolean results, in particular by obtaining better counting reductions. Another motivation for algebraic models of computation is that they offer an elegant framework for the study of numerical algorithms. The second part of this thesis follows this approach. We start from new algorithms for the search for approximate zeros of complex systems of n polynomials in n variables. Until now, these were algorithms for BSS machines. We study the implementation of these algorithms on digital computers and propose an algorithm using floating-point arithmetic for this problem.
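The numerical kernel of the second part is a Newton iteration executed in floating-point arithmetic. The sketch below applies such an iteration to a small invented system of two polynomials in two complex variables; certifying that an iterate is an approximate zero in the BSS sense requires further estimates (alpha-theory, for instance) that are not shown here.

```python
import numpy as np

# Hedged sketch: floating-point Newton iteration for a square system of
# polynomials. The example system and stopping rule are illustrative.

def F(z):
    z1, z2 = z
    return np.array([z1 ** 2 + z2 ** 2 - 1.0,  # two polynomials ...
                     z1 * z2 - 0.25])          # ... in two variables

def JF(z):
    z1, z2 = z
    return np.array([[2 * z1, 2 * z2],         # Jacobian matrix of F
                     [z2, z1]])

def newton(z, steps=30, tol=1e-14):
    """Iterate z <- z - JF(z)^(-1) F(z) until the residual is tiny."""
    z = np.asarray(z, dtype=complex)
    for _ in range(steps):
        z = z - np.linalg.solve(JF(z), F(z))
        if np.linalg.norm(F(z)) < tol:
            break
    return z

root = newton([1.0 + 0.2j, 0.3])
print(root, np.linalg.norm(F(root)))  # residual near machine precision
```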
165
On the parameterized complexity of finding short winning strategies in combinatorial games. Scott, Allan Edward Jolicoeur (29 April 2010)
A combinatorial game is a game in which all players have perfect information and there is no element of chance; some well-known examples include Othello, checkers, and chess. When people play combinatorial games they develop strategies, which can be viewed as functions that take a game position as input and return a move to make from that position. A strategy is winning if it guarantees the player victory despite whatever legal moves any opponent may make in response. The classical complexity of deciding whether a winning strategy exists for a given position in some combinatorial game has been well studied, both in general and for many specific combinatorial games. The vast majority of these problems are, depending on the specific properties of the game or class of games being studied, complete for either PSPACE or EXP.
In the parameterized complexity setting, Downey and Fellows initiated a study of "short" (or k-move) winning strategy problems. This can be seen as a generalization of "mate-in-k" chess problems, in which the goal is to find a strategy which checkmates your opponent within k moves regardless of how he responds. In their monograph on parameterized complexity, Downey and Fellows suggested that AW[*] was the "natural home" of short winning strategy problems, but there has been little work in this field since then.
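The generic question can be phrased as a short alternating recursion, sketched below on a toy single-pile subtraction game of our own choosing (take one or two counters; taking the last one wins); the game interface is hypothetical, and the exponential growth of this search in k is precisely what motivates a parameterized analysis.

```python
# Hedged sketch of the generic k-move winning strategy test. Here k
# counts the moves of the player to move, as in "mate-in-k" problems.

class Nim:
    """Single pile; remove 1 or 2; taking the last counter wins."""
    def moves(self, pos):
        return [m for m in (1, 2) if m <= pos]
    def apply(self, pos, m):
        return pos - m
    def is_win(self, pos):
        return pos == 0      # the move just played took the last counter

def wins_within(pos, k, game):
    """True iff the mover can force a win using at most k own moves."""
    if k <= 0:
        return False
    for m in game.moves(pos):
        nxt = game.apply(pos, m)
        if game.is_win(nxt):
            return True      # this move wins outright
        replies = game.moves(nxt)
        # Every opponent reply must leave a win in k-1 further moves.
        if replies and all(wins_within(game.apply(nxt, r), k - 1, game)
                           for r in replies):
            return True
    return False

print(wins_within(7, 2, Nim()), wins_within(7, 3, Nim()))  # False True
```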
In this thesis, we study the parameterized complexity of finding short winning strategies in combinatorial games, considering both the general case and several specific games. In the general case we show that many short games are as hard classically as their original variants, and that finding a short winning strategy is hard for AW[P] when the rules are implemented as succinct circuits. For specific short games, we show that endgame problems for checkers and Othello are in FPT, that alternating hitting set, Hex, and the non-endgame problem for Othello are in AW[*], and that short chess is AW[*]-complete. We also consider pursuit-evasion parameterized by the number of cops, showing that two variants of pursuit-evasion are AW[*]-hard and that the short versions of these problems are AW[*]-complete.
166
Towards a versatile transport protocol. Jourjon, Guillaume (Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW; January 2008)
This thesis presents three main contributions that aim to improve the transport layer of the current networking architecture. The transport layer is nowadays dominated by TCP and its congestion control. Recently, new congestion control mechanisms have been proposed; among them, TCP Friendly Rate Control (TFRC) appears to be one of the most complete. Nevertheless this congestion control mechanism, like TCP, takes into account neither the evolution of the network in terms of Quality of Service and mobility nor the evolution of the applications.

The first contribution of this thesis is a specialisation of TFRC congestion control to provide a QoS-aware transport protocol specifically designed to operate over QoS-enabled networks with bandwidth guarantee mechanisms. This protocol combines a QoS-aware congestion control, which takes network-level bandwidth reservations into account, with a fully ordered reliability mechanism, to provide a transport service similar to TCP. As a result, we obtain guaranteed throughput at the application level where TCP fails. This is the first transport protocol compliant with bandwidth-guaranteed networks.

While the set of network services expands, new technologies have been proposed and deployed at the physical layer. These new technologies are mainly characterised by wireless communication and by the mobility of the end systems, and they are usually deployed on devices with limited CPU power and memory. The second contribution of this thesis is therefore an adaptation of TFRC to such devices, accomplished by proposing a new sender-based version of TFRC. This version has been implemented and evaluated, and its numerous contributions and advantages compared to the usual TFRC version have been demonstrated.

Finally, we propose an optimisation of current TFRC implementations. This optimisation first consists of an algorithm based on a numerical analysis of the throughput equation used in TFRC, together with Newton's algorithm. We furthermore take a first step, by introducing a new framework for TFRC, towards better understanding TFRC behaviour and optimising the computation of the packet loss rate according to loss probability distributions.
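For concreteness, the sketch below evaluates the TCP throughput equation that standard TFRC uses (RFC 5348, with the usual t_RTO = 4R simplification) and inverts it for the loss event rate with a Newton-style iteration using a numerical slope. It illustrates the kind of computation being optimised, not the thesis's own algorithm, and the parameter values are invented.

```python
import math

# Hedged sketch of the TFRC throughput equation and its inversion.

def tfrc_rate(s, R, p, b=1):
    """Allowed rate in bytes/s for segment size s (bytes), RTT R (s)
    and loss event rate p (0 < p <= 1), per RFC 5348 with t_RTO = 4R."""
    t_rto = 4.0 * R
    denom = (R * math.sqrt(2.0 * b * p / 3.0)
             + t_rto * 3.0 * math.sqrt(3.0 * b * p / 8.0)
             * p * (1.0 + 32.0 * p * p))
    return s / denom

def loss_from_rate(s, R, x_target, p=1e-3, iters=20):
    """Invert tfrc_rate for p by Newton steps with a numerical slope."""
    for _ in range(iters):
        h = 1e-6 * p
        r = tfrc_rate(s, R, p)
        dr = (tfrc_rate(s, R, p + h) - r) / h
        p = min(max(p - (r - x_target) / dr, 1e-9), 0.5)
    return p

print(tfrc_rate(s=1460, R=0.1, p=0.01))       # roughly 1.6e5 bytes/s
p = loss_from_rate(1460, 0.1, 200_000.0)
print(p, tfrc_rate(1460, 0.1, p))             # round-trips to ~200000
```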
167
Algorithms for the resolution of contact dynamics with impact and friction. Haddouni, Mounia (27 May 2015)
The applications of the nonsmooth multibody systems field cover several domains, including aeronautics, automotive, robotics, railway, virtual reality and the watch industry, to cite a few. These industrial applications have ever more stringent requirements on both the accuracy and speed of the numerical methods used to compute the dynamics. As a consequence, research in the nonsmooth mechanics domain is very active, aiming to provide better integration methods for the resolution of the equations of motion and to develop better models for contact problems with and without friction. Since the nonsmooth mechanics framework allows for jumps in the velocity and acceleration of mechanical systems, the resulting algorithms have to handle such non-smoothness. In this PhD, several numerical schemes for the resolution of index-3, index-2 and index-1 DAEs are compared on industrial benchmarks with bilateral and unilateral constraints. The aim is to improve the efficiency of the Ansys Rigid Body solver, which is based on an event-driven integration strategy. Points of comparison include the enforcement of the bilateral constraints, time efficiency, and the handling of stiff dynamics.
This study also aims at giving a clear answer on the choice of the most suitable integration method for a given mechanical system, knowing its characteristics (number of contacts, presence of bilateral constraints, stiff dynamics...). The second part discusses several issues that frequently occur in the simulation of multibody systems, namely the accumulation of impacts, the resolution of friction, and the handling of jumps resulting from the presence of geometrical singularities. Dealing with such issues is particularly difficult in the framework of event-driven schemes. In order to handle these problems, a mixed event-driven/time-stepping scheme is developed which takes advantage of both integration families (event-driven and time-stepping). Several examples are used to validate our methodology.
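To illustrate the time-stepping half of such a mixed scheme, the sketch below takes Moreau-Jean style steps for a ball bouncing on a rigid floor under a Newton impact law; no impact time is ever located, so an accumulation of impacts is traversed in finite time. The model and all constants are illustrative choices of ours.

```python
# Hedged sketch: one-dimensional Moreau-Jean time stepping for a ball
# above a rigid floor (gap q >= 0) with Newton restitution e.

m, g, e, h = 1.0, 9.81, 0.5, 1e-3    # mass, gravity, restitution, step

def step(q, v):
    """One step: smooth integration, then velocity-level contact law."""
    v_free = v - h * g               # velocity ignoring the constraint
    if q + h * v_free <= 0.0:        # forecast gap: contact is active
        # Newton impact law: an impulse acts only if v_free + e*v < 0,
        # which makes the post-impact velocity max(v_free, -e*v).
        v = max(v_free, -e * v)
    else:
        v = v_free
    return max(q + h * v, 0.0), v    # position from corrected velocity

q, v, t = 1.0, 0.0, 0.0
while t < 2.0:                       # long enough to pass the impact
    q, v = step(q, v)                # accumulation for e = 0.5
    t += h
print(q, v)                          # ball at rest on the floor
```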
168
Combinatorial Games Theory. Valla, Tomáš (January 2012)
Supervisor: Prof. RNDr. Jaroslav Nešetřil, DrSc., IUUK MFF UK. In this thesis we study the complexity that appears when we consider the competitive version of a certain environment or process, using mainly the tools of algorithmic game theory, complexity theory, and others. For example, in the Internet environment, one cannot apply a classical graph algorithm to the graph of connected computers, because doing so usually requires the existence of a central authority that manipulates the graph. We describe a local and distributed game that, in a competitive environment without a central authority, simulates the computation of a weighted vertex cover, together with generalisations to hitting set and submodular weight functions. We prove that this game always has a Nash equilibrium and that each equilibrium yields the same approximation of the optimal cover as is achieved by the best known approximation algorithms. More precisely, the Price of Anarchy of our game equals the best known approximation ratio for this problem. No previous result in this field had a Price of Anarchy bounded by a constant. Moreover, we include results in two more fields, related to the complexity of competitive...
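For context, the centralised benchmark that a constant Price of Anarchy is measured against here is the classical 2-approximation for weighted vertex cover; the sketch below shows the local-ratio variant attributed to Bar-Yehuda and Even. This is only the benchmark algorithm, not the game from the thesis.

```python
# Hedged sketch of the local-ratio 2-approximation for weighted vertex
# cover: repeatedly charge an uncovered edge against both endpoints.

def vertex_cover_2approx(edges, weight):
    """edges: iterable of (u, v); weight: vertex -> nonnegative weight.
    Returns a cover of weight at most twice the optimum."""
    residual = dict(weight)
    for u, v in edges:
        if residual[u] > 0 and residual[v] > 0:
            pay = min(residual[u], residual[v])  # local-ratio charge
            residual[u] -= pay
            residual[v] -= pay
    # Vertices whose weight was fully paid off form the cover.
    return {x for x in residual if residual[x] == 0}

cover = vertex_cover_2approx(
    edges=[("a", "b"), ("b", "c"), ("c", "d")],
    weight={"a": 1.0, "b": 3.0, "c": 1.0, "d": 2.0})
print(cover)   # {'a', 'c'}: weight 2, which happens to be optimal here
```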
169
Flash crashes: A severe problem at Nasdaq OMX Stockholm? Roth, Sebastian; Söderström, Madelene (January 2018)
Advisor: Bo Sjö. Subject: Bachelor thesis in finance. Purpose: The purpose of this study is to understand and critically examine the impact flash crashes might have on the market for securities at Nasdaq OMX Stockholm. Our goal is to provide a clearer view of how flash crashes affect trading and the market participants. Method: The thesis is a qualitative study based on interviews with respondents with different relationships to Nasdaq OMX Stockholm and the Swedish financial market. Theory: The thesis draws on earlier studies within the subject, based on data and events from the United States; another economic theory employed is adverse selection. Empirics: The study builds on seven semi-structured interviews with participants in the Swedish financial market. The interviews are compared with the earlier events in the USA in order to draw conclusions about flash crashes at Nasdaq OMX Stockholm. Conclusion: We find it unlikely that a flash crash of the same magnitude as that of May 6, 2010 would occur at Nasdaq OMX Stockholm today. Furthermore, flash crashes appear to have little impact on market participants at Nasdaq OMX Stockholm, though there may be concerns about trust following flash crashes that should be taken seriously. In studying earlier research we find interesting theories about ways to predict flash crashes before they occur, but we cannot draw any conclusions about these theories in connection with Nasdaq OMX Stockholm.
170
Fast algorithms for solving mechanical problems on GPU. Ballage, Marion (4 July 2017)
Generating a conformal mesh on complex geometries leads to large model sizes in structural finite element simulations. The meshing time is directly linked to the complexity of the geometry and can contribute significantly to the total turnaround time. Graphics processing units (GPUs) are highly parallel programmable processors, delivering real performance gains on computationally complex, large problems. GPUs are used here to implement a new finite element method on a Cartesian mesh. A Cartesian mesh is well adapted to the parallelism needed by GPUs and reduces the meshing time to almost zero. The novel method relies on the finite element method and the extended finite element formulation. The extended finite element method was introduced in the field of fracture mechanics; it consists in enriching the basis functions to take care of the geometry and the interface, so it does not need a conformal mesh to represent cracks and avoids remeshing during their propagation. Our method is based on the extended finite element method, with an implicitly defined geometry, which allows for a good approximation of the geometry and boundary conditions without a conformal mesh.

To represent the model on a Cartesian grid, we use a level set representing a density. This density is greater than 0.5 inside the domain, less than 0.5 outside, and takes the value 0.5 on the boundary; it is defined by its values at the nodes of the mesh and interpolated inside each element. A new integration technique adapted to this geometrical representation is proposed. For an element cut by the level set, only the part full of material has to be integrated, so Gauss quadrature, which integrates polynomials exactly, is no longer suitable; we introduce a quadrature method with integration points spread on a dense regular grid. Since this integration can be costly, a learning approach is then considered to form the elementary stiffness matrices as functions of the density values at the vertices of the element. This learning method greatly reduces the computation time of the elementary matrices. Results obtained after analysis by the standard finite element method or by the novel method on a Cartesian mesh can grow enormously in size with the complexity of the model and the precision of the resolution scheme. Given the limited memory of graphics processing units, it is worthwhile to compress this data; we compress the models and the finite element results with a wavelet transform. The compression also helps with storage, by reducing the size of the generated files, and with data visualization.
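The cut-element quadrature is easy to illustrate. The sketch below interpolates the nodal densities bilinearly over a reference element and integrates with a dense midpoint grid, keeping only the points where the density exceeds 0.5; with a constant integrand the result is simply the element's solid fraction. The grid size and the example densities are illustrative choices of ours, and a real stiffness integrand would be evaluated at the same retained points.

```python
import numpy as np

# Hedged sketch: quadrature restricted to the solid part (density > 0.5)
# of a Cartesian element cut by the level-set density function.

def bilinear(d, xi, eta):
    """Interpolate nodal densities d = [d00, d10, d01, d11] over the
    reference square [0, 1]^2."""
    d00, d10, d01, d11 = d
    return (d00 * (1 - xi) * (1 - eta) + d10 * xi * (1 - eta)
            + d01 * (1 - xi) * eta + d11 * xi * eta)

def solid_fraction(d, n=64):
    """Fraction of the element lying inside the material."""
    pts = (np.arange(n) + 0.5) / n        # midpoint rule along each axis
    xi, eta = np.meshgrid(pts, pts)
    return np.mean(bilinear(d, xi, eta) > 0.5)

# Density 1 on the left edge, 0 on the right: half the element is solid.
print(solid_fraction([1.0, 0.0, 1.0, 0.0]))   # ~0.5
```

The learning step described above would then be trained on pairs of nodal densities and elementary stiffness matrices sampled this way, replacing the dense integration at solve time.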