  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
251

Strojové učení v algoritmickém obchodování / Machine Learning in Algorithmic Trading

Bureš, Michal January 2021 (has links)
This thesis is dedicated to the application of machine learning methods to algorithmic trading. We take inspiration from intraday traders and implement a system that predicts future price based on candlestick patterns and technical indicators. Using forex and US stock tick data, we create multiple aggregated bar representations. From these bars we construct original features based on candlestick pattern clustering by K-Means, together with long-term features derived from standard technical indicators. We then set up regression and classification tasks for Extreme Gradient Boosting models and extract buy and sell trading signals from their predictions. We perform experiments with eight different configurations over multiple assets and trading strategies using walk-forward validation. The results report Sharpe ratios and mean profits for all combinations. We discuss the results and recommend suitable configurations. Overall, our strategies outperform randomly selected strategies. Furthermore, we identify and discuss multiple opportunities for further research.
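The pipeline described above (candlestick-shape clustering feeding a gradient-boosted classifier whose predictions become trade signals) can be sketched roughly as follows. This is an illustrative reconstruction on synthetic bars, not the thesis's code: it substitutes scikit-learn's GradientBoostingClassifier for XGBoost, and the feature definitions and signal thresholds are our own simplifications.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic OHLC bars (the thesis aggregates tick data into bars; we fake some).
n = 500
close = np.cumsum(rng.normal(0, 1, n)) + 100
open_ = np.roll(close, 1); open_[0] = close[0]
high = np.maximum(open_, close) + rng.uniform(0, 1, n)
low = np.minimum(open_, close) - rng.uniform(0, 1, n)

# Candlestick shape features: body and wick sizes, normalized by bar range.
bar_range = np.maximum(high - low, 1e-9)
shape = np.column_stack([(close - open_) / bar_range,
                         (high - np.maximum(open_, close)) / bar_range,
                         (np.minimum(open_, close) - low) / bar_range])

# Cluster candlestick shapes (K-Means) into a discrete "pattern id" feature.
pattern = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(shape)

# Long-term feature: distance of close from its 20-bar moving average.
ma20 = np.convolve(close, np.ones(20) / 20, mode="same")
features = np.column_stack([pattern, close - ma20])

# Classification target: does the next bar close higher?
y = (np.roll(close, -1) > close).astype(int)
X_train, y_train = features[:-101], y[:-101]
X_test = features[-101:-1]

clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
proba = clf.predict_proba(X_test)[:, 1]

# Trading signals: buy when predicted up-probability is high, sell when low.
signals = np.where(proba > 0.55, 1, np.where(proba < 0.45, -1, 0))
print(len(signals))
```

In the thesis this train/test split would be repeated as a rolling walk-forward scheme rather than the single split shown here.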
252

DEVELOPMENT OF AN OPEN-SOURCE TOOLBOX FOR DESIGN AND ANALYSIS OF ACTIVE DEBRIS REMEDIATION ARCHITECTURES

Joshua David Fitch (16360641) 15 June 2023 (has links)
Orbital debris is a growing challenge for the space industry. The increasing density of derelict objects in high-value orbital regimes is resulting in more conjunction warnings and break-up events, with cascading repercussions for active satellites and spacecraft. The recent rapid growth of the commercial space industry, in particular proliferated satellite constellations, has placed orbital debris remediation at the forefront of industry efforts. The need to remove existing debris, combined with a growing demand for active-satellite life-extension services, has created an emerging market for space logistics, in particular spacecraft capable of rendezvous and docking, orbital refueling, debris deorbiting, or object relocation. This market has seen numerous companies emerge with multi-purpose on-orbit servicing platforms. This ecosystem poses technological, economic, and policy questions to decision-makers looking to acquire platforms or invest in technologies, and it requires a system-of-systems approach to determine mission and system concepts of merit. An open-source modeling, analysis, and simulation software toolbox has been developed that enables rapid early-stage analysis and design of diverse fleets of on-orbit servicing platforms, with a specific emphasis on active debris removal (ADR) applications. The toolbox provides fetching and processing of real-time orbital catalog data, clustering and scoring of high-value debris targets, flexible and efficient multi-vehicle, multi-objective, time-varying routing optimization, and fleet-level lifecycle cost estimation. The toolbox is applied to a diverse sample of promising commercial platforms to enable government decision-makers to make sound investment and acquisition decisions in support of the development of ADR technologies, missions, and companies.
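The target scoring-and-clustering step can be illustrated with a toy sketch. Everything here is invented for illustration: the debris catalog is random, and the congestion proxy and inclination-band grouping are crude stand-ins for the toolbox's real catalog processing and routing optimization.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical debris catalog: mass (kg), altitude (km), inclination (deg).
# A real catalog would be built from orbital element data; these are made up.
n = 30
mass = rng.uniform(50, 9000, n)
alt = rng.uniform(600, 1000, n)
inc = rng.uniform(70, 100, n)

# Score targets: heavier objects in a congested altitude band rank higher.
congestion = np.exp(-((alt - 800.0) ** 2) / (2 * 50.0 ** 2))  # crude density proxy
score = mass * congestion

# Group targets into clusters by inclination band, since plane changes are
# the dominant delta-v cost for a multi-target removal vehicle.
band = (inc // 5).astype(int)

# Greedy per-cluster routing: visit the richest cluster's targets best-first.
best_band = max(set(band), key=lambda b: score[band == b].sum())
order = np.argsort(-score[band == best_band])
route = np.flatnonzero(band == best_band)[order]
print(route)
```

A real routing stage would replace the greedy ordering with the multi-objective, time-varying optimization the abstract describes.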
253

Operationalizing FAccT : A Case Study at the Swedish Tax Agency / FAccT i praktiken : En fallstudie på Skatteverket

Jansson, Daniel, Strallhofer, Daniel January 2020 (has links)
Fairness, accountability and transparency (FAccT) in machine learning is an interdisciplinary area that concerns the design, development, deployment and maintenance of ethical AI and ML. Examples of research challenges in the field are detecting biased models, accountability issues that arise with systems that make decisions without human intervention or oversight, and black-box issues where decisions made by an AI system are untraceable. Whereas previous research within the FAccT domain typically uses only one perspective to investigate ethical AI, this paper takes the opposite approach, considering all three perspectives together to conduct a holistic case study. The aim of this paper is to provide tangible insights into how organizations can work with ethical AI and ML. The empirical evidence is gathered from the advanced data analytics (ADA) team at the Swedish Tax Agency, in the form of interviews and quantitative data from a model developed by the team. Most notably, the quantitative and qualitative results show that the data set used to train the model is biased, and that there are risks with the current modus operandi due to (1) disagreeing views on accountability and (2) differences in literacy and understanding of ML and AI. Furthermore, this paper features examples of how newly proposed frameworks such as SMACTR (a large-scale AI systems audit framework), datasheets and model cards can be used by ADA in the development process to address these issues, together with the potential benefits and caveats of the frameworks themselves. We also show how theoretical models such as Larsson's seven nuances of transparency and Bovens' accountability framework can be applied in a practical setting, and provide supporting evidence for their respective applicability. Finally, we discuss the implications of taking a collective approach to FAccT, the importance of ethics and transparency, and comparisons of the frameworks used.
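One of the frameworks mentioned, model cards, can be approximated in code as a structured record attached to each deployed model. The schema below is a minimal sketch in the spirit of model-card proposals; it is not ADA's or SMACTR's actual format, and all field values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """A simplified model card: a machine-readable record that documents a
    model's intended use, known limitations, and accountable owner."""
    model_name: str
    intended_use: str
    out_of_scope_uses: list = field(default_factory=list)
    training_data: str = ""
    evaluation_metrics: dict = field(default_factory=dict)
    known_biases: list = field(default_factory=list)
    accountable_owner: str = ""  # who answers for the model's decisions

# Hypothetical card for the kind of model the case study examines.
card = ModelCard(
    model_name="risk-scoring-v1",
    intended_use="Prioritize cases for manual review by human analysts",
    out_of_scope_uses=["Fully automated decisions without human review"],
    known_biases=["Training set over-represents one group"],
    accountable_owner="ADA team lead",
)
print(card.model_name, len(card.known_biases))
```

Keeping such a record alongside the model makes the accountability and transparency questions the study raises explicit at development time.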
254

La performativité algorithmique : construction identitaire et interférence

Beaupré-Daignault, Alexis 08 1900 (has links)
In this thesis, we try to determine whether artificial intelligence systems contribute to the deterioration of social justice. Our hypothesis is that a performative force emerges from the repetition of algorithmic decisions, which interferes with the identity construction of individuals. According to the adopted analytical framework, which is grounded in Iris Marion Young's conception of justice, we argue that identity interference is unjust because it contributes to the oppression of cultural imperialism. If borne out, this phenomenon of algorithmic performativity would explain, on the one hand, the interaction between identity and algorithms, and, on the other hand, allow for its exploitation. Ultimately, we argue that the performativity of algorithms could be harnessed and put at the service of social justice.
255

A Study on Algorithmic Trading / En studie om algoritmisk aktiehandel

Hägg, Philip January 2023 (has links)
Algorithms have been used in finance since the early 2000s and accounted for about 25% of the market around 2005. At the time of this research, algorithms account for approximately 85% of market volume. The challenge faced by many investors and fund managers is beating the Swedish market index OMXS30. This research investigates publicly available algorithms and their potential for implementation and modification to outperform the market. Much research has been done on the subject, most of it at a high academic level. Although few algorithms were found in the search, a few that had managed to beat other market indices were of particular interest. The market data for this research was obtained from Nordnet's closed API, specifically historical price data for various financial securities. The algorithms use the historical price data to generate buy and sell signals, which represent trades. These trades were then used to calculate performance metrics such as the geometric mean return and the Sharpe ratio, which were used to measure and compare performance against the OMXS30 using a quantitative method. On average, the algorithms did not perform well on the chosen securities, although some securities stood out in all cases. Beating the market is considered a difficult task, and this research reflects some of the challenges involved. The chosen method highlights the importance of the stocks the algorithms trade, emphasizing that stocks cannot be chosen randomly. Building a fully automated, unsupervised trading system is challenging and requires extensive work. Some strategies tend to require human supervision to maximize returns and limit losses, while others yield low returns for low risk.
256

Figuration for Piano and Electronic Sounds

Se Rin, Oh 29 September 2021 (has links)
No description available.
257

The Role of Algorithmic Decision Processes in Decision Automation: Three Case Studies

Durtschi, Blake Edward 15 March 2010 (has links) (PDF)
This thesis develops a new abstraction for solving problems in decision automation. Decision automation is the process of creating algorithms which use data to make decisions without the need for human intervention. In this abstraction, four key ideas/problems are highlighted which must be considered when solving any decision problem. These four problems are the decision problem, the learning problem, the model reduction problem, and the verification problem. One of the benefits of this abstraction is that a wide range of decision problems from many different areas can be broken down into these four “key” sub-problems. By focusing on these key sub-problems and the interactions between them, one can systematically arrive at a solution to the original problem. Three new learning platforms have been developed in the areas of portfolio optimization, business intelligence, and automated water management in order to demonstrate how this abstraction can be applied to three different types of problems. For the automated water management platform a full solution to the problem is developed using this abstraction. This yields an automated decision process which decides how much water to release from the Piute Reservoir into the Sevier River during an irrigation season. Another motivation for developing these learning platforms is that they can be used to introduce students of all disciplines to automated decision making.
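The four sub-problems can be made concrete with a deliberately tiny sketch of the reservoir-release setting. All dynamics, units and thresholds below are invented; the point is only how the learning, model reduction, decision and verification pieces plug together.

```python
# Hypothetical decomposition of a reservoir-release decision into the four
# sub-problems named in the abstract; numbers and dynamics are invented.

def learn_demand_model(history):
    """Learning problem: fit a trivial average-demand model from past seasons."""
    return sum(history) / len(history)

def reduce_model(daily_demand):
    """Model reduction problem: collapse daily dynamics to one seasonal total."""
    return daily_demand * 180  # assume a 180-day irrigation season

def decide_release(reservoir_level, seasonal_demand, min_reserve=1000.0):
    """Decision problem: release demand, capped so the reserve is preserved."""
    return min(seasonal_demand, max(reservoir_level - min_reserve, 0.0))

def verify(release, reservoir_level, min_reserve=1000.0):
    """Verification problem: check the decision never violates the reserve."""
    return reservoir_level - release >= min_reserve

demand = learn_demand_model([4.8, 5.1, 5.0, 5.3])       # acre-feet per day
release = decide_release(8000.0, reduce_model(demand))  # season release
print(release, verify(release, 8000.0))
```

The value of the abstraction is that the same four-function skeleton applies, with different internals, to the portfolio optimization and business intelligence platforms as well.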
258

[en] TECHNOMICS AND DEMOGRAMMAR: LAW AND TECHNICS IN THE NOMOS OF PLATFORMS / [pt] TECNOMIA E DEMOGRAMÁTICA: DIREITO E TÉCNICA NO NOMOS DAS PLATAFORMAS

JOSE ANTONIO REGO MAGALHAES 14 June 2021 (has links)
This thesis intends an opening move in the field of legal theory as well as in that of law and technology studies, and especially in the intersection of the two. It tries to construct a theory of law and of its relation to technology that allows for the navigation of the contemporary passage from what I call modern technomics to a platform technomics linked to the emergence of planetary-scale computation, algorithmic governance and the climate crisis. To do that, I first try to make a contribution to a speculative legal theory (neither an internal theory of modern law as we know it, nor a critical/deconstructive legal theory). I do so by reading two key thinkers of modern law, Hans Kelsen and Carl Schmitt, as complementary, and through currents of the so-called speculative turn in contemporary theory. Kelsen is read as an accelerationist/inhumanist, and Schmitt in light of cosmo/geontopolitics, while Deleuze and Guattari's assemblage theory serves as a background through which the rest is connected. Secondly, I mobilize this conceptual apparatus to speculate about platform technomics. I build technomic concepts of code, platform, device, application, interface and user. Algorithmic demogrammar is defined as operating by massive data collection, tracing of graphs and modulation of conducts. I look to trace some tendencies in the transition to platform technomics, e.g. to the contingency of the positions of person and thing, to the indistinction between norm and bias, to the non-instrumentality of technics, to the plurality of worlds, to the superimposition of nomoi, to the confusion of cognition and governance, etc. I finish by offering three speculative models/paradigms for the navigation of platform technomics: an inhuman model, based on the hypothesis of general intelligence; an animistic paradigm, linked to the Gaia hypothesis/myth; and, finally, a tentative composition between the two.
259

Algorithmic Trading and Prediction of Foreign Exchange Rates Based on the Option Expiration Effect / Algoritmisk handel och prediktion av valutakurser baserade på effekten av FX-optioners förfall

Mozayyan Esfahani, Sina January 2019 (has links)
The equity option expiration effect is a well-observed phenomenon, explained by delta-hedge rebalancing and pinning risk, which make the strike price of an option act as a magnet for the underlying price. The FX option expiration effect has not previously been explored to the same extent. In this paper, the FX option expiration effect is investigated with the aim of finding out whether it provides valuable information for predicting FX rate movements. New models are created based on the concept of the option relevance coefficient, which determines which options are at higher risk of being in the money or out of the money at a specified future time and thus have an attraction effect. An algorithmic trading strategy is created to evaluate these models. The new models based on the FX option expiration effect strongly outperform the time series models used as benchmarks. The best results are obtained when information about the FX option expiration effect is included as an exogenous variable in a GARCH-X model. However, despite promising and consistent results, more research is required before significant conclusions can be drawn.
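The idea of an option relevance coefficient, weighting strikes by how likely they are to matter at expiry and predicting a pull toward the resulting "magnet" level, might be sketched as follows. The Gaussian weighting formula and all inputs are our own illustration, not the thesis's exact definition.

```python
import numpy as np

def relevance_coefficient(spot, strike, vol, t_expiry):
    """Hypothetical relevance weight: strikes within roughly one expected
    move of spot get high weight; far strikes get weight near zero."""
    expected_move = vol * np.sqrt(t_expiry) * spot
    return np.exp(-0.5 * ((strike - spot) / expected_move) ** 2)

def expiry_drift_signal(spot, strikes, notionals, vol, t_expiry):
    """Predicted pull toward the notional-weighted 'magnet' strike."""
    strikes = np.asarray(strikes)
    w = relevance_coefficient(spot, strikes, vol, t_expiry)
    w = w * np.asarray(notionals)
    magnet = (w * strikes).sum() / w.sum()
    return magnet - spot  # positive -> expected drift up toward the magnet

# Illustrative EUR/USD-style inputs: a large expiring notional sits just above spot.
sig = expiry_drift_signal(spot=1.1000,
                          strikes=[1.0900, 1.1050, 1.1200],
                          notionals=[1.0, 3.0, 0.5],
                          vol=0.08, t_expiry=1 / 252)
print(sig > 0)
```

In the thesis's best-performing setup, a signal of this kind enters a GARCH-X model as the exogenous regressor rather than being traded directly.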
260

Data Fusion and Text Mining for Supporting Journalistic Work

Zsombor, Vermes January 2022 (has links)
During the past several decades, journalists have been struggling with the ever-growing amount of data on the internet. Investigating the validity of sources or finding similar articles for a story can consume a lot of time and effort, and these issues are amplified by the shrinking staff of news agencies. The solution is to empower the remaining professional journalists with digital tools created by computer scientists. This thesis project is inspired by the idea of supporting journalistic work with interactive visual interfaces and artificial intelligence. More specifically, within the scope of this project, we created a backend module that supports several text mining methods, such as keyword extraction, named entity recognition, sentiment analysis and fake news classification, as well as data collection from various data sources, to help professionals in the field of journalism. To implement our system, we first gathered requirements from several researchers and practitioners in journalism, media studies, and computer science, then reviewed the literature on current approaches. Results are evaluated both with quantitative methods, such as individual component benchmarks, and with qualitative methods, by analyzing the outcomes of semi-structured interviews with collaborating and external domain experts. Our results show that the domain experts' perceived value aligns with the performance of the components in the individual evaluations. This shows that there is potential in this research area and that future work would be welcomed by the journalistic community.
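A toy version of two of the backend's text mining components (keyword extraction and sentiment analysis) is sketched below. The stopword list and sentiment lexicon are tiny illustrative stand-ins for the real NLP components such a module would wrap.

```python
import re
from collections import Counter

# Tiny illustrative resources; a production module would use full stopword
# lists and a trained model or large lexicon instead.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "on"}
SENTIMENT = {"good": 1, "great": 1, "strong": 1, "bad": -1, "poor": -1, "weak": -1}

def keywords(text, k=3):
    """Frequency-based keyword extraction after stopword removal."""
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]
    return [w for w, _ in Counter(words).most_common(k)]

def sentiment(text):
    """Lexicon score: positive, negative, or zero for neutral text."""
    return sum(SENTIMENT.get(w, 0) for w in re.findall(r"[a-z]+", text.lower()))

article = "The economy shows strong growth. Strong exports and good demand."
print(keywords(article), sentiment(article))
```

In the actual system each such component sits behind a backend API so that the interactive visual interface can call them on collected articles.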
