141

What do we talk about when we talk about algorithmic literacy? : A scoping review

Augustinus, Melanie January 2022 (has links)
Problem formulation, goal and objectives: Algorithms are ubiquitous in digital society, yet complex to understand and often hidden. Algorithmic literacy can be a useful concept for educating and empowering users. However, it is not uniformly defined or used, and the state of knowledge is unclear. The aim of this thesis is to examine algorithmic literacy as a concept, which other concepts are associated with it, and what empirical and theoretical knowledge exists on the topic. Theory and method: Information literacy research serves as the theoretical perspective, focusing on the role of evaluative and explorative approaches to research. A scoping review is the chosen method, covering peer-reviewed journal articles published in English from 2018 to 2022 and indexed in LISA, LISTA, ERIC ProQuest, and Scopus. Empirical results: Algorithmic literacy is often situated within information, media, and/or digital literacies. Closely related terms are attitude, agency, trust, and transparency. Four themes were identified: the necessity of algorithmic literacy, algorithm awareness as the basis, teaching and learning, and studying algorithmic literacy. Demographic and socioeconomic factors play a role: younger age and higher education correlated with higher levels of algorithmic literacy. Algorithmic literacy is learned through personal experience and through formal education at all levels. Conclusions: Algorithmic literacy research would benefit from a smaller, clearly defined set of terms. The relationships between closely related concepts need further examination. Librarians and educators should develop and share interventions at regional or national levels. Several knowledge gaps were identified that may serve as a future research agenda.
142

Algorithmic Mechanism Design for Data Replication Problems

Guo, Minzhe 13 September 2016 (has links)
No description available.
143

Retail Facility Design Considering Product Exposure

Mowrey, Corinne H. 30 August 2016 (has links)
No description available.
144

Optimal design of VLSI structures with built-in self test based on reduced pseudo-exhaustive testing

Pimenta, Tales Cleber January 1992 (has links)
No description available.
145

The Algorithm Made Me a Le$bean : Algorithmic "Folk Theory" Within the Lesbian Community on TikTok

Reje Franzén, Fanny January 2022 (has links)
The aim of this thesis stems from news coverage published in 2020 that exposed practices of censorship and shadowbanning of LGBTQ+ creators, implemented with the help of the TikTok algorithm (Ryan et al., 2020). The thesis analyzes communication published on the Reddit forum r/actuallesbians regarding the lesbian community on TikTok, to examine whether algorithmic "folk theories" had formed around the platform's algorithm. This was done through a qualitative thematic content analysis of posts dedicated to TikTok's algorithm and lesbian TikTok creators, sourced from the Reddit forum. The data sample comprised 4 Reddit posts and the 116 subsequent comments published by 68 users. The analysis found two main themes in the users' algorithmic theories: the algorithm as it is and the algorithm as you make it. Individuals' beliefs about the algorithm also shaped their beliefs about the platform's ability to host a diverse lesbian community. The study found that the community had created opposing "folk theories" about how the algorithm functions and about the extent to which the community could create narratives representing the whole community. Depending on which "folk theory" users fit into, they exhibited different behaviors and beliefs about how community could be created on the platform.
146

Matching Market for Skills

Delgado, Lisa A. January 2009 (has links)
This dissertation builds a model of information exchange, where the information is skills. A two-sided matching market for skills is employed that includes two distinct sides, skilled and unskilled agents, and the matches that connect these agents. The unskilled agents wish to purchase skills from the skilled agents, who each possess one valuable and unique skill. Skilled agents may match with many unskilled agents, while each unskilled agent may match with only one skilled agent. Direct interaction is necessary between the agents to teach and learn the skill. Thus, there must be mutual consent for a match to occur and the skill to be exchanged. In this market for skills, a discrete, simultaneous-move game is employed where all agents announce their strategies at once, every skilled agent announcing a price and every unskilled agent announcing the skill she wishes to purchase. First, both Nash equilibria and a correlated equilibrium are determined for an example of this skills market game. Next, comparative statics are performed on this discrete, simultaneous-move game through computer simulations. Finally, a continuous, simultaneous-move game is studied where all agents announce their strategies at once, every skilled agent announcing a price and every unskilled agent announcing a skill and price pair. For this game, an algorithm is developed that, if used by all agents to determine their strategies, leads to a strong Nash equilibrium for the game. / Economics
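A minimal sketch of the discrete, simultaneous-move game may help fix ideas. The toy instance below is hypothetical (the market size, price grid, valuation table, and the simplifying assumption that every unskilled agent must buy some skill are illustrative choices, not taken from the dissertation); it enumerates all pure-strategy profiles and reports those where no agent gains by a unilateral deviation:

```python
import itertools

# Hypothetical toy instance: 2 skilled agents (one skill each), 2 unskilled agents.
# Simplification: every unskilled agent must buy exactly one skill (no outside option).
PRICES = [1, 2, 3]            # discrete price grid for skilled agents
VALUE = [[4, 2],              # VALUE[i][j] = unskilled agent i's value for skill j
         [2, 4]]

def payoffs(prices, choices):
    """Payoffs for a profile: skilled agent j earns her price times her number
    of buyers; unskilled agent i earns her value minus the price she pays."""
    skilled = [prices[j] * sum(1 for c in choices if c == j) for j in range(2)]
    unskilled = [VALUE[i][choices[i]] - prices[choices[i]] for i in range(2)]
    return skilled, unskilled

def is_pure_nash(prices, choices):
    skilled, unskilled = payoffs(prices, choices)
    for j in range(2):                      # skilled agent j deviates in price
        for p in PRICES:
            if payoffs(prices[:j] + (p,) + prices[j+1:], choices)[0][j] > skilled[j]:
                return False
    for i in range(2):                      # unskilled agent i switches skill
        for c in range(2):
            if payoffs(prices, choices[:i] + (c,) + choices[i+1:])[1][i] > unskilled[i]:
                return False
    return True

equilibria = [(p, c)
              for p in itertools.product(PRICES, repeat=2)
              for c in itertools.product(range(2), repeat=2)
              if is_pure_nash(p, c)]
print(equilibria)   # [((3, 3), (0, 1))]: top prices, each buyer takes her preferred skill
```

With these toy numbers the only pure equilibrium has both skilled agents at the top price and each unskilled agent buying her preferred skill; the dissertation's correlated-equilibrium and continuous-game analyses go well beyond this sketch.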
147

Exploring the Landscape of Big Data Analytics Through Domain-Aware Algorithm Design

Dash, Sajal 20 August 2020 (has links)
Experimental and observational data emerging from various scientific domains necessitate fast, accurate, and low-cost analysis of the data. While exploring the landscape of big data analytics, multiple challenges arise from three characteristics of big data: the volume, the variety, and the velocity. High volume and velocity of the data warrant a large amount of storage, memory, and compute power, while a large variety of data demands cognition across domains. Addressing domain-intrinsic properties of data can help us analyze the data efficiently through the frugal use of high-performance computing (HPC) resources. In this thesis, we present our exploration of the data analytics landscape with domain-aware approximate and incremental algorithm design. We propose three guidelines targeting three properties of big data for domain-aware big data analytics: (1) explore geometric and domain-specific properties of high dimensional data for succinct representation, which addresses the volume property, (2) design domain-aware algorithms through mapping of domain problems to computational problems, which addresses the variety property, and (3) leverage incremental arrival of data through incremental analysis and invention of problem-specific merging methodologies, which addresses the velocity property. We demonstrate these three guidelines through the solution approaches of three representative domain problems. We present Claret, a fast and portable parallel weighted multi-dimensional scaling (WMDS) tool, to demonstrate the application of the first guideline. It combines algorithmic concepts extended from stochastic force-based multi-dimensional scaling (SF-MDS) and Glimmer. Claret computes approximate weighted Euclidean distances by combining a novel data mapping called stretching with the Johnson-Lindenstrauss lemma to reduce the complexity of WMDS from O(f(n)d) to O(f(n) log d). In demonstrating the second guideline, we map the problem of identifying multi-hit combinations of genetic mutations responsible for cancers to the weighted set cover (WSC) problem by leveraging the semantics of cancer genomic data obtained from cancer biology. Solving the mapped WSC with an approximate algorithm, we identified a set of multi-hit combinations that differentiate between tumor and normal tissue samples. To identify three- and four-hit combinations, which require orders of magnitude more computational power, we scaled out the WSC algorithm on a hundred nodes of the Summit supercomputer. In demonstrating the third guideline, we developed a tool, iBLAST, to perform incremental sequence similarity search. Developing new statistics to combine search results over time makes incremental analysis feasible. iBLAST performs (1+δ)/δ times faster than NCBI BLAST, where δ represents the fraction of database growth. We also explored various approaches to mitigating catastrophic forgetting in the incremental training of deep learning models. / Doctor of Philosophy / Experimental and observational data emerging from various scientific domains necessitate fast, accurate, and low-cost analysis of the data. While exploring the landscape of big data analytics, multiple challenges arise from three characteristics of big data: the volume, the variety, and the velocity. Here, volume represents the data's size, variety represents the various sources and formats of the data, and velocity represents the data arrival rate. High volume and velocity of the data warrant a large amount of storage, memory, and computational power.
In contrast, a large variety of data demands cognition across domains. Addressing domain-intrinsic properties of data can help us analyze the data efficiently through the frugal use of high-performance computing (HPC) resources. This thesis presents our exploration of the data analytics landscape with domain-aware approximate and incremental algorithm design. We propose three guidelines targeting three properties of big data for domain-aware big data analytics: (1) explore geometric (pair-wise distance and distribution-related) and domain-specific properties of high dimensional data for succinct representation, which addresses the volume property, (2) design domain-aware algorithms through mapping of domain problems to computational problems, which addresses the variety property, and (3) leverage incremental data arrival through incremental analysis and invention of problem-specific merging methodologies, which addresses the velocity property. We demonstrate these three guidelines through the solution approaches of three representative domain problems. We demonstrate the application of the first guideline through the design and development of Claret, a fast and portable parallel weighted multi-dimensional scaling (WMDS) tool that can reduce the dimension of high-dimensional data points. In demonstrating the second guideline, we identify combinations of cancer-causing gene mutations by mapping the problem to a well-known computational problem, the weighted set cover (WSC) problem. We scaled out the WSC algorithm on a hundred nodes of the Summit supercomputer to solve the problem in less than two hours instead of an estimated hundred years. In demonstrating the third guideline, we developed a tool, iBLAST, to perform incremental sequence similarity search. This analysis was made possible by developing new statistics to combine search results over time. We also explored various approaches to mitigating catastrophic forgetting in deep learning models, where a model forgets how to perform machine learning tasks on older data in a streaming setting.
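The second guideline's mapping lands on a well-studied problem, so a standard approximate solver gives a feel for the computation involved. The sketch below is the classical greedy heuristic for weighted set cover, not the thesis's parallel Summit implementation; the function name, variable names, and toy data are illustrative assumptions:

```python
def greedy_weighted_set_cover(universe, sets, weights):
    """Classical greedy approximation for weighted set cover.

    In a mapping like the thesis describes, elements would correspond to
    tissue samples and each set to the samples covered (differentiated) by
    one candidate gene combination; the toy data below is purely illustrative.
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the cheapest set per newly covered element; skip useless sets.
        candidates = [i for i in range(len(sets)) if sets[i] & uncovered]
        if not candidates:
            raise ValueError("universe cannot be covered by the given sets")
        best = min(candidates, key=lambda i: weights[i] / len(sets[i] & uncovered))
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

# Toy instance: 6 samples, 4 candidate sets with weights.
universe = range(6)
sets = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 1, 2, 3, 4, 5}]
weights = [1.0, 1.0, 1.0, 3.5]
print(greedy_weighted_set_cover(universe, sets, weights))  # [0, 2]: full cover at cost 2.0
```

The greedy rule is what makes the problem tractable at scale: each pass only needs the per-set ratio of weight to newly covered elements, which parallelizes naturally across candidate sets.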
148

Writing with Video Games

Stinson, Samuel D. 01 October 2018 (has links)
No description available.
149

A hypermedia and project-based approach to music, sound and media art

Koutsomichalis, Marinos G. January 2015 (has links)
This thesis describes my artistic practice as essentially project-based, site-responsive and hypermediating. Hypermediacy—i.e. the tendency of certain media or objects to keep their various constituents separate from their structure—is to be understood as opaque, juxtaposed and after a recurring contiguity with different kinds of interfaces. Accordingly, and within the context of the various projects that constitute this thesis, it is demonstrated how, in response to the particular places I work and to the various people I collaborate with, different kinds of materials and methodologies are incorporated into broader hybrids that are mediated (interfaced) in miscellaneous ways so as to result in original works of art. Materials and methodologies are shown to be intertwined and interdependent with each other as well as with the different ways in which they are interfaced, which accounts for an explicitly project-based, rather than artwork-based, approach which, in turn, de-emphasises the finished artefact in favour of process, performance, research and exploration. Projects are, then, shown to be explicitly site- or situation-responsive, as they are not implementations of pre-existent ideas, but rather emerge as my original response to the particular sites, materials, people and the various other constituents that are involved in their very production. Interfaces to such hybrids as well as their very material and methodological elements are also shown to be hyper-mediated. It is finally argued that such an approach essentially accelerates multi-perspectivalism in that a project may spawn a number of diverse, typically medium-specific and/or site-specific, artworks that all exemplify different qualities which are congenital to the particular nature of each project.
150

Fluid Queues: Building Upon the Analogy with QBD processes

da Silva Soares, Ana 11 March 2005 (has links)
Markov modulated fluid queues are two-dimensional Markov processes, of which the first component, called the level, represents the content of a buffer or reservoir and takes real values; the second component, called the phase, is the state of a Markov process which controls the evolution of the level in the following manner: the level varies linearly at a rate which depends on the phase and which can take any real value. In this thesis, we explore the link between fluid queues and Quasi Birth-and-Death (QBD) processes, and we apply Markov renewal techniques in order to derive the stationary distribution of various fluid models. To begin with, we study a fluid queue with an infinite capacity buffer; we determine its stationary distribution and we present an algorithm which performs very efficiently in the determination of this distribution. We observe that the equilibrium distribution of the fluid queue is very similar to that of a QBD process with infinitely many levels. We further exploit the similarity between the two processes, and we determine the stationary distribution of a finite capacity fluid queue. We show that the algorithm available in the infinite case allows for the computation of all the important quantities entering in the expression of this distribution.
We then consider more complex models, of either finite or infinite capacity, in which the behaviour of the phase process may change whenever the buffer is empty or full, or when it reaches certain thresholds. We show that the techniques that we develop for the simpler models can be extended quite naturally in this context. Finally, we study the necessary and sufficient conditions that lead to independence between the level and the phase of an infinite capacity fluid queue in the stationary regime. These results are based on similar developments for QBD processes.
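The thesis's matrix-analytic algorithms are not reproduced here, but the underlying model is straightforward to sanity-check by simulation. The sketch below is a Monte Carlo illustration with hypothetical parameters (the rates r and q and the capacity B are made up for the example, not taken from the thesis): it simulates a two-phase Markov-modulated fluid queue with a finite buffer and estimates the stationary mean level together with the probabilities that the buffer is empty or full:

```python
import random

# Hypothetical two-phase fluid queue: the level rises at rate r[0] in phase 0
# and drains at rate |r[1]| in phase 1; the phase is a two-state CTMC whose
# holding time in phase i is exponential with rate q[i]. All values illustrative.
r = [1.0, -2.0]
q = [0.5, 1.0]
B = 10.0                     # finite buffer capacity

def simulate(horizon=200_000.0, seed=1):
    random.seed(seed)
    phase, level, t = 0, 0.0, 0.0
    area = time_empty = time_full = 0.0   # area = integral of level over time
    while t < horizon:
        dt = random.expovariate(q[phase])          # holding time in current phase
        end = level + r[phase] * dt                # untruncated endpoint
        if end > B:                                # hits the full boundary
            t_move = (B - level) / r[phase]
            area += (level + B) / 2.0 * t_move + B * (dt - t_move)
            time_full += dt - t_move
            level = B
        elif end < 0.0:                            # hits the empty boundary
            t_move = level / -r[phase]
            area += level / 2.0 * t_move           # then sits at 0, adding no area
            time_empty += dt - t_move
            level = 0.0
        else:                                      # linear move, trapezoidal area
            area += (level + end) / 2.0 * dt
            level = end
        t += dt
        phase = 1 - phase                          # two-state chain: always switch
    return area / t, time_empty / t, time_full / t

mean_level, p_empty, p_full = simulate()
print(f"mean level ~ {mean_level:.3f}, P(empty) ~ {p_empty:.3f}, P(full) ~ {p_full:.3f}")
```

Because the level moves linearly between phase transitions, the trajectory is piecewise linear and the time-average can be accumulated exactly with trapezoids; the boundary handling mirrors the empty/full behaviour the abstract describes, and the analytic methods in the thesis compute the same stationary quantities without simulation.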
