61 |
Rozvoj algoritmického myšlení na střední škole pomocí programovatelných robotických systémů / Development of algorithmic thinking in upper secondary education using programmable robotic systems
Černý, Ondřej. January 2021 (has links)
Contents: Abbreviations used; Introduction; Research problem and aims of the thesis; Methods used in the thesis; 1 Theoretical background; 1.1 Computational thinking; 1.2 Algorithmic thinking; 1.2.1 Algorithm; 1.2.2 Properties of algorithms; 1.2.3 Ways of writing algorithms; 1.2.4 Basic algorithmic constructs...
62 |
Computational Intelligence and Complexity Measures for Chaotic Information Processing
Arasteh, Davoud. 16 May 2008 (has links)
This dissertation investigates the application of computational intelligence methods to the analysis of nonlinear chaotic systems in the framework of many known and newly designed complex systems. Parallel comparisons are made between these methods. This provides insight into the difficult challenges facing nonlinear systems characterization and aids in developing a generalized algorithm for computing algorithmic complexity measures, Lyapunov exponents, information dimension, and topological entropy. These metrics are implemented to characterize the dynamic patterns of discrete and continuous systems and make it possible to distinguish order from disorder in them. Steps required for computing Lyapunov exponents with a reorthonormalization method and a group theory approach are formalized. Procedures for implementing the computational algorithms are designed, and numerical results for each system are presented. The advance-time sampling technique is designed to overcome the scarcity of phase-space samples and the buffer overflow problem in algorithmic complexity measure estimation for slow-dynamics feedback-controlled systems. It is proved analytically and tested numerically that for a quasiperiodic system such as a Fibonacci map, complexity grows logarithmically with the evolutionary length of the data block. It is concluded that a normalized algorithmic complexity measure can be used as a system classifier: this quantity turns out to be one for random sequences and a non-zero value less than one for chaotic sequences. For periodic and quasi-periodic responses, the normalized complexity approaches zero as the data strings grow, with a faster decreasing rate for periodic responses. Algorithmic complexity analysis is also performed on a class of rate-1/n convolutional encoders, and the degree of diffusion in their random-like patterns is measured. Simulation evidence indicates that the algorithmic complexity associated with a particular class of rate-1/n codes increases with the encoder constraint length, in parallel with the increase in the error-correcting capacity of the decoder. Comparing groups of rate-1/n convolutional encoders, it is observed that as the encoder rate decreases from 1/2 to 1/7, the encoded data sequence manifests smaller algorithmic complexity together with a larger free distance value.
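To make the normalized complexity measure described above concrete, here is a minimal sketch (not the dissertation's implementation) that counts Lempel-Ziv phrases in a binary string and normalizes the count so that random sequences tend toward one while periodic ones approach zero; the sequence sources and parameters (a logistic map with r = 3.9, a period-2 signal, uniform random bits) are assumptions chosen only for illustration.

```python
# Minimal sketch of a normalized Lempel-Ziv (1976) complexity measure used as a
# rough order/disorder classifier. Sequence sources are illustrative assumptions.
import math
import random

def lz_phrase_count(s: str) -> int:
    """Count the phrases in the Lempel-Ziv 1976 parsing of a string."""
    i, count, n = 0, 0, len(s)
    while i < n:
        length = 1
        # Grow the candidate phrase while it still occurs in the preceding text.
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        count += 1
        i += length
    return count

def normalized_complexity(s: str) -> float:
    """Normalize so that a random binary sequence tends toward 1 as len(s) grows."""
    n = len(s)
    return lz_phrase_count(s) * math.log2(n) / n

def logistic_bits(n: int, r: float = 3.9, x: float = 0.4) -> str:
    """Symbolic (threshold 0.5) binary sequence from the logistic map."""
    bits = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        bits.append("1" if x > 0.5 else "0")
    return "".join(bits)

if __name__ == "__main__":
    n = 4096
    random.seed(0)
    sequences = {
        "random":   "".join(random.choice("01") for _ in range(n)),
        "chaotic":  logistic_bits(n),          # expected: between 0 and 1
        "periodic": "01" * (n // 2),           # expected: close to 0
    }
    for name, seq in sequences.items():
        print(f"{name:9s} normalized complexity = {normalized_complexity(seq):.3f}")
```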
63 |
Algorithmic Trading : Analyse von computergesteuerten Prozessen im Wertpapierhandel unter Verwendung der Multifaktorenregression / Algorithmic Trading : analysis of computer driven processes in securities trading using a multifactor regression model
Gomolka, Johannes. January 2011 (has links)
Over the last decade, electronic trading on stock exchanges has advanced rapidly; practically every exchange now runs an electronic trading system. In this context, the term algorithmic trading describes a phenomenon in which computer programs replace human traders, helping to make investment decisions or to execute transactions. Algorithmic trading is only one of many innovations that have shaped the development of exchange trading, alongside, for example, telegraphy, the telephone, the fax, and electronic securities settlement. The question today is no longer whether computer programs are used in exchange trading, but where the border between fully automated trading (by computers) and manual trading (by humans) runs.
Research on algorithmic trading is confronted with the problem that hardly any information about these computer programs is accessible. The idea of this dissertation is to circumvent this problem and to extract information about algorithmic trading indirectly from an analysis of (fund) returns. Johannes Gomolka therefore investigates the research question of whether conclusions about computer-driven securities trading (in short: algorithmic trading) can be drawn from an analysis of (fund) returns. To answer this question, the author formulates a new definition of algorithmic trading and distinguishes two basic functions of the computer programs, buy-side and sell-side algorithmic trading (decision support and transaction support). For the empirical study, Gomolka draws on the multifactor style-analysis model of Fung and Hsieh (1997), which makes it possible to decompose time series of fund returns into interpretable components and to assign an economic meaning to the individual regression factors. The results of this dissertation show that style analysis does allow conclusions about algorithmic trading to be drawn from (fund) returns; these conclusions are not of a technical nature, however, but are limited to the analysis of trading strategies (investment styles).
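To illustrate the style analysis referred to above, the sketch below regresses a synthetic fund return series on a handful of factor returns and reads the coefficients as style exposures, in the spirit of Fung and Hsieh (1997); the factor names, the simulated data, and the exposures are assumptions made purely for illustration and are not taken from the dissertation.

```python
# Sketch of a multifactor style regression: fund returns are regressed on factor
# returns and the betas are interpreted as style exposures. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
T = 120                                                    # monthly observations
factor_names = ["equities", "bonds", "fx_carry", "trend"]  # hypothetical factors

# Simulated factor returns and a fund tilted toward equities and trend-following.
factors = rng.normal(0.0, 0.04, size=(T, len(factor_names)))
true_exposures = np.array([0.6, 0.0, 0.1, 0.3])
fund = 0.001 + factors @ true_exposures + rng.normal(0.0, 0.01, T)

# Ordinary least squares with an intercept (the fund's alpha).
X = np.column_stack([np.ones(T), factors])
coef, _, _, _ = np.linalg.lstsq(X, fund, rcond=None)
alpha, betas = coef[0], coef[1:]

ss_res = np.sum((fund - X @ coef) ** 2)
ss_tot = np.sum((fund - fund.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"alpha = {alpha:.4f}, R^2 = {r_squared:.3f}")
for name, beta in zip(factor_names, betas):
    print(f"  exposure to {name:9s}: {beta:+.2f}")
```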
64 |
Contributions to the theory and applications of tree languages
Högberg, Johanna. January 2007 (has links)
This thesis is concerned with theoretical as well as practical aspects of tree languages. It consists of an introduction and eight papers, organised into three parts. The first part is devoted to algorithmic learning of regular tree languages, the second part to bisimulation minimisation of tree automata, and the third part to tree-based generation of music. We now summarise the contributions made in each part.
In Part I, an inference algorithm for regular tree languages is presented. The algorithm is a generalisation of a previous algorithm by Angluin, and the learning task is to derive, with the aid of a so-called MAT-oracle, the minimal (partial and deterministic) finite tree automaton M that recognises the target language U over some ranked alphabet Σ. The algorithm executes in time O(|Q| |δ| (m + |Q|)), where Q and δ are the set of states and the transition table of M, respectively, r is the maximal rank of any symbol in Σ, and m is the maximum size of any answer given by the oracle. This improves on a similar algorithm by Sakakibara as dead states are avoided both in the learning phase and in the resulting automaton. Part I also describes a concrete implementation which includes two extensions of the basic algorithm.
In Part II, bisimulation minimisation of nondeterministic weighted tree automata (henceforth, wta) is introduced in general, and for finite tree automata (which can be seen as wta over the Boolean semiring) in particular. The concepts of backward and forward bisimulation are extended to wta, and efficient minimisation algorithms are developed for both types of bisimulation. In the special case where the underlying semiring of the input automaton is either cancellative or Boolean, these minimisation algorithms can be further optimised by adapting existing partition refinement algorithms by Hopcroft, Paige, and Tarjan. The implemented minimisation algorithms are demonstrated on a typical task in natural language processing.
In Part III, we consider how tree-based generation can be applied to algorithmic composition. An algebra is presented whose operations act on musical pieces, and a system capable of generating simple musical pieces is implemented in the software Treebag: starting from input which is either generated by a regular tree grammar or provided by the user via a digital keyboard, a number of top-down tree transducers are applied to generate a tree over the operations provided by the music algebra. The evaluation of this tree yields the musical piece generated.
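For readers unfamiliar with the objects involved: the Q and δ in the complexity bound above are, in the bottom-up view, a finite set of states and a finite transition table mapping a symbol and a tuple of child states to a state. The sketch below shows one such (deterministic, partial) finite tree automaton and its bottom-up run; the automaton, which accepts Boolean expression trees evaluating to true, is an illustrative assumption and not an example taken from the thesis.

```python
# Sketch of a deterministic bottom-up finite tree automaton: a state set Q, a
# transition table delta over a ranked alphabet, and a set of accepting states.
# The example automaton accepts Boolean expression trees that evaluate to true.
from dataclasses import dataclass

@dataclass
class Tree:
    symbol: str
    children: tuple = ()

# Ranked alphabet: 'tt' and 'ff' have rank 0, 'not' rank 1, 'and' rank 2.
Q = {"q0", "q1"}                       # q1 means "this subtree evaluates to true"
delta = {
    ("tt", ()): "q1",
    ("ff", ()): "q0",
    ("not", ("q0",)): "q1",
    ("not", ("q1",)): "q0",
    ("and", ("q0", "q0")): "q0",
    ("and", ("q0", "q1")): "q0",
    ("and", ("q1", "q0")): "q0",
    ("and", ("q1", "q1")): "q1",
}
accepting = {"q1"}

def run(t: Tree) -> str:
    """Bottom-up run; raises KeyError where the (partial) transition table is undefined."""
    child_states = tuple(run(c) for c in t.children)
    return delta[(t.symbol, child_states)]

def accepts(t: Tree) -> bool:
    return run(t) in accepting

# not(and(tt, ff)) evaluates to true, so the automaton accepts it.
example = Tree("not", (Tree("and", (Tree("tt"), Tree("ff"))),))
print(accepts(example))   # True
```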
65 |
Fairness in Rankings
Zehlike, Meike. 26 April 2022
Artificial intelligence and adaptive systems that learn patterns from past behavior and historic data play an increasing role in our day-to-day lives. We are surrounded by a vast number of algorithmic decision aids, and more and more by algorithmic decision-making systems, too. As a subcategory, ranked search results have become the main mechanism by which we find content, products, places, and people online. Their ordering thus contributes not only to the satisfaction of the searcher, but also to the career and business opportunities, educational placement, and even social success of those being ranked. Researchers and policy makers have therefore become increasingly concerned with systematic biases and discrimination in data-driven ranking models.
To address the problem of discrimination and fairness in the context of rankings, three main problems have to be solved. First, we have to understand the ethical properties and moral goals of different ranking situations and the relevant fairness definitions, so that the most appropriate method can be chosen for a given context. Second, we have to make sure that, for any fairness requirement in a ranking context, a formal definition that meets that requirement exists; if a context requires group fairness, for example, we need an actual definition of group fairness in rankings in the first place. Third, the methods, together with their underlying fairness concepts and properties, need to be available to a wide range of audiences, from programmers to lawyers, policy makers, and politicians.
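As an illustration of the second problem, one simple shape such a formal definition can take is a per-prefix constraint: every top-k prefix of the ranking must contain at least a minimum share of protected candidates. The sketch below checks that condition; it is an illustrative assumption only, not one of the (more involved, statistical) definitions developed in the thesis.

```python
# Sketch of a simple prefix-based group-fairness check for a ranking: every
# top-k prefix must contain at least floor(p * k) protected candidates.
from math import floor

def prefix_group_fair(ranking, protected, p):
    """ranking: candidate ids, best first; protected: ids in the protected group;
    p: required minimum share of protected candidates in every prefix."""
    protected_seen = 0
    for k, candidate in enumerate(ranking, start=1):
        if candidate in protected:
            protected_seen += 1
        if protected_seen < floor(p * k):
            return False, k          # constraint first violated at position k
    return True, None

# Example: ids starting with 'p' are protected, required share p = 0.4.
ranking = ["a1", "p1", "a2", "a3", "p2", "a4", "a5", "a6"]
print(prefix_group_fair(ranking, {"p1", "p2", "p3"}, p=0.4))   # (False, 8)
```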
66 |
Online Communities and Health
Villacis Calderon, Eduardo David. 26 August 2022
People are increasingly turning to online communities for entertainment, information, and social support, among other uses and gratifications. Online communities include traditional online social networks (OSNs) such as Facebook but also specialized online health communities (OHCs) where people go specifically to seek social support for various health conditions. OHCs have obvious health ramifications but the use of OSNs can also influence people's mental health and health behaviors. The use of online communities has been widely studied but in the health context their exploration has been more limited. Not only are online communities being extensively used for health purposes, but there is also increasing concern that the use of online communities can itself affect health. Therefore, there is a need to better understand how such technologies influence people's health and health behaviors.
The research in this dissertation centers on examining how online community use influences health and health behaviors. There are three studies in this dissertation. The first study develops a conceptual model to explain the process whereby a request for social support from an OHC user is answered by a wounded healer, a person who leverages their own experiences with health challenges to help others. The second study investigates how the algorithmic fairness, accountability, and transparency of an OSN newsfeed algorithm influence users' attitudes and beliefs about childhood vaccines and ultimately their vaccine hesitancy. The third study examines how OSN social overload can, depending on how the OSN is used, lead to psychological distress and to received social support. The research contributes theoretical and practical insights to the literature on the use of online communities in the health context. / Doctor of Philosophy / People use online communities to socialize and to seek out information and help. Online social networks (OSNs) such as Facebook are large communities on which people segregate into smaller groups to discuss joint interests. Some online communities cater to specific needs, such as online health communities (OHCs), which provide platforms for people to talk about the health challenges they or their loved ones are facing. Online communities do not intentionally seek controversy, but because they welcome all perspectives, they have contributed to phenomena such as vaccine hesitancy. Moreover, social overload from the use of OSNs can have both positive and negative psychological effects on users. This dissertation examines the intersection of online communities and health. The first study explains how the interaction of the characteristics of a request for social support made by an OHC user and the characteristics of the wounded healer drives the provision of social support. The model that is developed shows the paths through which the empathy of the wounded healer and the characteristics of the request lead to motivation to provide help to those in need on an OHC. In the second study, the role of the characteristics of a newsfeed algorithm, specifically fairness, accountability, and transparency (FAT), in the development of childhood vaccine hesitancy is examined. The findings show that people's perceptions of the newsfeed algorithm's FAT increase their negative attitudes toward vaccination and their perceived behavioral control over vaccination. The third study examines how different uses of OSNs can influence the relationships between social overload and psychological distress and received social support. The findings show how OSN use can be tailored to decrease negative and increase positive psychological consequences without discontinuing use.
67 |
Methodology for the production and delivery of generative music for the personal listener : systems for realtime generative music production
Murphy, Michael J. January 2013 (has links)
This thesis will describe a system for the production of generative music through a specific methodology and provide an approach for the delivery of this material. The system and body of work will be targeted specifically at the personal listening audience. As the largest current consumer of music across all genres, the personal listening audience represents the largest and most applicable market for which to develop such a system. By considering how recorded media compare to concert performance, it is possible to ascertain which attributes of performance may be translated to a generative medium. In addition, an outline of how fixed media have changed how people listen to music will be considered. By looking at these concepts, an attempt is made to create a system which satisfies society's need for music which is not only commodified and easily approached, but also closes the qualitative gap between a static delivery medium and concert-based output. This is approached within the context of contemporary classical music. Furthermore, by considering the development and fragmentation of the personal listening audience through technological developments, a methodology for the delivery of generative media to a range of devices will be investigated. A body of musical work will be created which attempts to realise these goals in a qualitative fashion. These works will span the development of the composition methodology and the algorithmic methods covered. A conclusion based on the possibilities of each system with regard to its qualitative output will form the basis for evaluation. As this investigation is seated within the field of music, the musical output and composition methodology will be considered the primary deciding factors of a system's feasibility. The contribution of this research to the field will be a methodology for the composition and production of algorithmic music in realtime, and a feasible method for the delivery of this music to a wide audience.
68 |
Model checking infinite-state systems : generic and specific approaches
To, Anthony Widjaja. January 2010 (has links)
Model checking is a fully automatic formal verification method that has been extremely successful in validating and verifying safety-critical systems over the past three decades. In the past fifteen years, there has been a lot of work on extending many model checking algorithms over finite-state systems to finitely representable infinite-state systems. Unlike in the case of finite systems, decidability can easily become a problem in infinite-state model checking. In this thesis, we present generic and specific techniques that can be used to derive decidability with near-optimal computational complexity for various model checking problems over infinite-state systems. Generic and specific techniques primarily differ in the way in which a decidability result is derived. Generic techniques take a "top-down" approach: we start with a Turing-powerful formalism for infinite-state systems (in the sense of being able to generate the computation graphs of Turing machines up to isomorphism) and then impose semantic restrictions under which the desired model checking problem becomes decidable. In other words, to show that a subclass of the infinite-state systems generated by this formalism is decidable with respect to the model checking problem under consideration, we simply have to prove that this subclass satisfies the semantic restriction. Specific techniques, on the other hand, take a "bottom-up" approach in the sense that we restrict to a non-Turing-powerful formalism of infinite-state systems at the outset. The main benefit of generic techniques is that they can be used as algorithmic metatheorems, i.e., they can give unified proofs of decidability of various model checking problems over infinite-state systems. Specific techniques are more flexible in the sense that they can be used to derive decidability or optimal complexity when generic techniques fail.
In the first part of the thesis, we adopt word/tree automatic transition systems as a generic formalism of infinite-state systems. Such formalisms can be used to generate many interesting classes of infinite-state systems that have been considered in the literature, e.g., the computation graphs of counter systems, Turing machines, pushdown systems, prefix-recognizable systems, regular ground-tree rewrite systems, PA-processes, and order-2 collapsible pushdown systems. Although the generality of these formalisms makes most interesting model checking problems (even safety) undecidable, they are known to have nice closure and algorithmic properties. We use these properties to obtain several algorithmic metatheorems over word/tree automatic systems, e.g., for deriving decidability of various model checking problems including recurrent reachability and Linear Temporal Logic (LTL) with complex fairness constraints. These algorithmic metatheorems can be used to uniformly prove decidability with optimal (or near-optimal) complexity for various model checking problems over many classes of infinite-state systems considered in the literature. In fact, many of these decidability and complexity results were not previously known.
In the second part of the thesis, we study various model checking problems over subclasses of counter systems that were already known to be decidable. In particular, we consider reversal-bounded counter systems (and their extensions with discrete clocks), one-counter processes, and networks of one-counter processes. We derive the optimal complexity of various model checking problems including LTL, EF-logic, and first-order logic with reachability relations (and restrictions thereof). In most cases, we obtain a single or double exponential reduction in the previously known upper bounds on the complexity of the problems.
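As a toy illustration of what an infinite-state model checking question looks like, the sketch below encodes a one-counter system, whose configuration graph (control state, counter value) is infinite, and answers a reachability query by breadth-first search with an explicit counter bound. The transition rules and the bound are assumptions made for illustration, and the bounded search is only a semi-decision procedure, not one of the techniques developed in the thesis.

```python
# Toy one-counter system: configurations are (control state, counter value).
# Reachability is checked by BFS with the counter capped at an explicit bound,
# so a "True" answer is conclusive while exhausting the bound is not.
from collections import deque

# Rules: (source state, fires only at counter zero?, target state, counter delta).
rules = [
    ("p", False, "p", +1),   # in p the counter may be pumped up
    ("p", False, "q", -1),   # move to q, decrementing
    ("q", False, "q", -1),   # in q the counter is drained
    ("q", True,  "r",  0),   # an empty counter in q allows entering r
]

def reachable(start, target, counter_bound=1000):
    """Breadth-first search over configurations with the counter capped at counter_bound."""
    seen = {start}
    queue = deque([start])
    while queue:
        state, counter = queue.popleft()
        if (state, counter) == target:
            return True
        for src, only_at_zero, dst, delta in rules:
            if src != state or (only_at_zero and counter != 0):
                continue
            succ = (dst, counter + delta)
            if 0 <= succ[1] <= counter_bound and succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return False

print(reachable(("p", 0), ("r", 0)))   # True: p pumps the counter, q drains it to zero
```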
69 |
Quantifying the Effects of Correlated Covariates on Variable Importance Estimates from Random Forests
Kimes, Ryan Vincent. 01 January 2006 (has links)
Recent advances in computing technology have led to the development of algorithmic modeling techniques. These methods can be used to analyze data which are difficult to analyze using traditional statistical models. This study examined the effectiveness of variable importance estimates from the random forest algorithm in identifying the true predictor among a large number of candidate predictors. A simulation study was conducted using twenty different levels of association among the independent variables and seven different levels of association between the true predictor and the response. We conclude that the random forest method is an effective classification tool when the goals of a study are to produce an accurate classifier and to provide insight regarding the discriminative ability of individual predictor variables. These goals are common in gene expression analysis; therefore, we apply the random forest method for the purpose of estimating variable importance on a microarray data set.
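In the spirit of the simulation described above, though not the study's actual code or settings, the following sketch generates equicorrelated candidate predictors with a single true predictor, fits a random forest, and prints the resulting variable importance estimates; the sample size, correlation level, and effect size are illustrative assumptions.

```python
# Sketch: effect of correlated covariates on random forest variable importance.
# One true predictor (X_0) drives a binary response; the remaining predictors are
# correlated noise. All settings are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, p, rho = 500, 10, 0.8      # observations, candidate predictors, pairwise correlation

# Equicorrelated predictors: X_j = sqrt(rho) * z + sqrt(1 - rho) * e_j.
z = rng.normal(size=(n, 1))
X = np.sqrt(rho) * z + np.sqrt(1.0 - rho) * rng.normal(size=(n, p))

# Only X_0 is associated with the response.
logit = 1.5 * X[:, 0]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

forest = RandomForestClassifier(n_estimators=500, random_state=0)
forest.fit(X, y)

for j, importance in enumerate(forest.feature_importances_):
    marker = "  <- true predictor" if j == 0 else ""
    print(f"X_{j}: importance {importance:.3f}{marker}")
# With rho = 0.8 the importance of X_0 is typically diluted across its correlated
# neighbours relative to the uncorrelated case (rho = 0.0).
```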
70 |
Beyond the piano : the super instrument : widening the instrumental capacities in the context of the piano music of the 21st century
Kallionpaa, Maria E. January 2014 (has links)
Thanks to the development of new technology, musical instruments are no longer tied to their existing acoustic or technical limitations, as almost all parameters can be augmented or modified in real time. An increasing number of composers, performers, and computer programmers have thus become interested in different ways of "supersizing" acoustic instruments in order to open up previously unheard instrumental sounds. This leads us to the question of what constitutes a super instrument and what challenges it poses aesthetically and technically. This work explores the effects that super instruments have on the identity of a given solo instrument, on the identity of a composition, and on the experience of performing this kind of repertoire. The super instrument comes to be defined as a bundle of more than one instrumental line that achieves a coherent overall identity when generated in real time. On the basis of my own experience of performing the works discussed in this dissertation, super instruments vary a great deal, but each has a transformative effect on the identity and performance practice of the pianist. This discussion approaches the topic from the viewpoint of contemporary keyboard music, showcasing examples of super instrument compositions of the 21st century. Thus, the main purpose of this practice-based research project is to explore the essence and role of the piano or toy piano in a super instrument constellation, as well as the performer's role as a "super instrumentalist". I consider these issues in relation to case studies drawn from my own compositional work and a selection of works composed by Karlheinz Essl and Jeff Brown.