141.
What do we talk about when we talk about algorithmic literacy? : A scoping review. Augustinus, Melanie. January 2022.
Problem formulation, goal and objectives: Algorithms are ubiquitous in digital society, yet complex to understand and often hidden. Algorithmic literacy can be a useful concept when educating and empowering users. However, it is not uniformly defined or used, and the state of knowledge is unclear. The aim of this thesis is to examine algorithmic literacy as a concept, which other concepts are associated with it, and what empirical and theoretical knowledge exists on the topic. Theory and method: Information literacy research serves as the theoretical perspective, focusing on the role of evaluative and explorative approaches in research. The scoping review is chosen as the method. Included are peer-reviewed journal articles, published in English from 2018 to 2022, from LISA, LISTA, ERIC ProQuest, and Scopus. Empirical results: Algorithmic literacy is often placed within information, media, and/or digital literacies. Closely related terms are attitude, agency, trust, and transparency. Four themes were identified: the necessity of algorithmic literacy, algorithm awareness as the basis, teaching and learning, and studying algorithmic literacy. Demographic and socioeconomic factors play a role: lower age and higher education correlated with higher levels of algorithmic literacy. Algorithmic literacy is learned via personal experiences and formal education at all levels. Conclusions: Algorithmic literacy research would benefit from a limited, clearly defined set of terms. The relationship between closely related concepts needs to be examined further. Librarians and educators should develop and share interventions at regional or national levels. Various knowledge gaps have been identified that may serve as a future research agenda.
142.
Algorithmic Mechanism Design for Data Replication Problems. Guo, Minzhe. 13 September 2016.
No description available.
143.
Retail Facility Design Considering Product Exposure. Mowrey, Corinne H. 30 August 2016.
No description available.
144.
Optimal design of VLSI structures with built-in self test based on reduced pseudo-exhaustive testing. Pimenta, Tales Cleber. January 1992.
No description available.
145.
The Algorithm Made Me a Le$bean : Algorithmic "Folk Theory" Within the Lesbian Community on TikTok. Reje Franzén, Fanny. January 2022.
The aim of this thesis was established based on news coverage published in 2020 that exposed practices of censorship and shadowbanning of LGBTQ+ creators, implemented with the help of the TikTok algorithm (Ryan et al., 2020). The thesis analyzes communication published on the Reddit forum r/actuallesbians regarding the lesbian community on TikTok, to examine whether algorithmic "folk theories" had been created regarding the platform's algorithm. This was done by performing a qualitative thematic content analysis of posts dedicated to the subject of TikTok's algorithm and lesbian TikTok creators, sourced from the Reddit forum. The data sample consisted of four Reddit posts and their 116 subsequent comments, published by 68 users. The analysis found two main themes in the users' algorithmic theories: the algorithm as it is, and the algorithm as you make it. Individuals' beliefs about the algorithm also affected their beliefs about the platform's ability to host a diverse lesbian community. The study found that the community had created opposing "folk theories" regarding the function of the algorithm and the extent to which the community could create narratives representing the whole community. Depending on which "folk theory" users fit into, they exhibited different behaviors and beliefs about how community could be created on the platform.
146.
Matching Market for Skills. Delgado, Lisa A. January 2009.
This dissertation builds a model of information exchange, where the information is skills. A two-sided matching market for skills is employed that includes two distinct sides, skilled and unskilled agents, and the matches that connect them. The unskilled agents wish to purchase skills from the skilled agents, who each possess one valuable and unique skill. Skilled agents may match with many unskilled agents, while each unskilled agent may match with only one skilled agent. Direct interaction between the agents is necessary to teach and learn the skill; thus, there must be mutual consent for a match to occur and the skill to be exchanged. In this market for skills, a discrete, simultaneous-move game is employed in which all agents announce their strategies at once: every skilled agent announces a price, and every unskilled agent announces the skill she wishes to purchase. First, both Nash equilibria and a correlated equilibrium are determined for an example of this skills-market game. Next, comparative statics are performed on this discrete, simultaneous-move game through computer simulations. Finally, a continuous, simultaneous-move game is studied in which all agents announce their strategies at once, every skilled agent announcing a price and every unskilled agent announcing a skill-and-price pair. For this game, an algorithm is developed that, if used by all agents to determine their strategies, leads to a strong Nash equilibrium for the game. / Economics
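A discrete, simultaneous-move game of this kind can be illustrated with a brute-force search for pure-strategy Nash equilibria. The sketch below is a toy, not the dissertation's model: two skilled agents announce prices from an assumed grid, two unskilled agents announce which skill to buy, and the assumed skill values and payoffs are invented for illustration. A profile is kept only if no agent can gain by deviating unilaterally.

```python
from itertools import product

# Toy skills market (all numbers assumed for illustration).
PRICES = [1, 2, 3]    # price grid available to each skilled agent
SELLERS = [0, 1]      # indices of the two skilled agents
VALUE = {0: 3, 1: 2}  # value of each skill to any unskilled buyer

def payoffs(p0, p1, b0, b1):
    """Payoffs (seller0, seller1, buyer0, buyer1) for one strategy profile:
    sellers announce prices p0, p1; buyers b0, b1 each pick a seller."""
    price = {0: p0, 1: p1}
    buyers = [b0, b1]
    seller_pay = [sum(price[s] for b in buyers if b == s) for s in SELLERS]
    buyer_pay = [VALUE[b] - price[b] for b in buyers]
    return (*seller_pay, *buyer_pay)

def pure_nash():
    """Enumerate every profile; keep those with no profitable unilateral deviation."""
    eqs = []
    for p0, p1, b0, b1 in product(PRICES, PRICES, SELLERS, SELLERS):
        u = payoffs(p0, p1, b0, b1)
        if any(payoffs(q, p1, b0, b1)[0] > u[0] for q in PRICES):
            continue  # seller 0 would deviate
        if any(payoffs(p0, q, b0, b1)[1] > u[1] for q in PRICES):
            continue  # seller 1 would deviate
        if any(payoffs(p0, p1, s, b1)[2] > u[2] for s in SELLERS):
            continue  # buyer 0 would deviate
        if any(payoffs(p0, p1, b0, s)[3] > u[3] for s in SELLERS):
            continue  # buyer 1 would deviate
        eqs.append((p0, p1, b0, b1))
    return eqs
```

Enumeration like this only works for tiny examples; the dissertation's continuous game, where strategies are price (or skill-and-price) announcements over a continuum, is precisely what motivates an algorithmic construction of equilibrium instead.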
147.
Exploring the Landscape of Big Data Analytics Through Domain-Aware Algorithm Design. Dash, Sajal. 20 August 2020.
Experimental and observational data emerging from various scientific domains necessitate fast, accurate, and low-cost analysis of the data. While exploring the landscape of big data analytics, multiple challenges arise from three characteristics of big data: the volume, the variety, and the velocity. High volume and velocity of the data warrant a large amount of storage, memory, and compute power, while a large variety of data demands cognition across domains. Addressing domain-intrinsic properties of data can help us analyze the data efficiently through the frugal use of high-performance computing (HPC) resources. In this thesis, we present our exploration of the data analytics landscape with domain-aware approximate and incremental algorithm design. We propose three guidelines targeting three properties of big data for domain-aware big data analytics: (1) explore geometric and domain-specific properties of high-dimensional data for succinct representation, which addresses the volume property, (2) design domain-aware algorithms through mapping of domain problems to computational problems, which addresses the variety property, and (3) leverage incremental arrival of data through incremental analysis and invention of problem-specific merging methodologies, which addresses the velocity property. We demonstrate these three guidelines through the solution approaches of three representative domain problems.
We present Claret, a fast and portable parallel weighted multi-dimensional scaling (WMDS) tool, to demonstrate the application of the first guideline. It combines algorithmic concepts extended from stochastic force-based multi-dimensional scaling (SF-MDS) and Glimmer. Claret computes approximate weighted Euclidean distances by combining a novel data mapping called stretching with the Johnson-Lindenstrauss lemma to reduce the complexity of WMDS from O(f(n)d) to O(f(n) log d). In demonstrating the second guideline, we map the problem of identifying multi-hit combinations of genetic mutations responsible for cancers to the weighted set cover (WSC) problem by leveraging the semantics of cancer genomic data obtained from cancer biology. Solving the mapped WSC with an approximate algorithm, we identified a set of multi-hit combinations that differentiate between tumor and normal tissue samples. To identify three- and four-hits, which require orders of magnitude larger computational power, we have scaled out the WSC algorithm on a hundred nodes of the Summit supercomputer. In demonstrating the third guideline, we developed a tool, iBLAST, to perform incremental sequence similarity search. Developing new statistics to combine search results over time makes incremental analysis feasible. iBLAST performs (1+δ)/δ times faster than NCBI BLAST, where δ represents the fraction of database growth. We also explored various approaches to mitigate catastrophic forgetting in incremental training of deep learning models. / Doctor of Philosophy / Experimental and observational data emerging from various scientific domains necessitate fast, accurate, and low-cost analysis of the data. While exploring the landscape of big data analytics, multiple challenges arise from three characteristics of big data: the volume, the variety, and the velocity. Here volume represents the data's size, variety represents various sources and formats of the data, and velocity represents the data arrival rate.
High volume and velocity of the data warrant a large amount of storage, memory, and computational power. In contrast, a large variety of data demands cognition across domains. Addressing domain-intrinsic properties of data can help us analyze the data efficiently through the frugal use of high-performance computing (HPC) resources. This thesis presents our exploration of the data analytics landscape with domain-aware approximate and incremental algorithm design. We propose three guidelines targeting three properties of big data for domain-aware big data analytics: (1) explore geometric (pair-wise distance and distribution-related) and domain-specific properties of high dimensional data for succinct representation, which addresses the volume property, (2) design domain-aware algorithms through mapping of domain problems to computational problems, which addresses the variety property, and (3) leverage incremental data arrival through incremental analysis and invention of problem-specific merging methodologies, which addresses the velocity property.
We demonstrate these three guidelines through the solution approaches of three representative domain problems. We demonstrate the application of the first guideline through the design and development of Claret, a fast and portable parallel weighted multi-dimensional scaling (WMDS) tool that can reduce the dimension of high-dimensional data points. In demonstrating the second guideline, we identify combinations of cancer-causing gene mutations by mapping the problem to a well-known computational problem, the weighted set cover (WSC) problem. We have scaled out the WSC algorithm on a hundred nodes of the Summit supercomputer to solve the problem in less than two hours instead of an estimated hundred years. In demonstrating the third guideline, we developed a tool, iBLAST, to perform incremental sequence similarity search. This analysis was made possible by developing new statistics to combine search results over time. We also explored various approaches to mitigate the catastrophic forgetting of deep learning models, where a model's performance on older data degrades as new data arrives in a streaming setting.
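The dimension reduction underlying the first guideline rests on Johnson-Lindenstrauss-style random projection. The snippet below is a generic sketch of that idea, not Claret's actual "stretching" mapping (which is not specified here): a random Gaussian matrix, scaled by 1/sqrt(k), maps d-dimensional points down to k dimensions while approximately preserving pairwise Euclidean distances.

```python
import math
import random

def jl_project(points, k, seed=0):
    """Map d-dimensional points to k dimensions with a random Gaussian
    matrix scaled by 1/sqrt(k), in the spirit of the Johnson-Lindenstrauss
    lemma (a generic sketch, not Claret's stretching map)."""
    rng = random.Random(seed)
    d = len(points[0])
    # k x d projection matrix of scaled standard normals
    proj = [[rng.gauss(0, 1) / math.sqrt(k) for _ in range(d)] for _ in range(k)]
    return [[sum(row[j] * p[j] for j in range(d)) for row in proj] for p in points]

def dist(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
```

The lemma guarantees that k = O(ε⁻² log n) dimensions suffice to preserve all pairwise distances among n points within a 1 ± ε factor with high probability, which is the kind of guarantee that lets a WMDS tool trade heavy dependence on the ambient dimension d for a logarithmic one.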
148.
Automated Exploration of Uncertain Deep Chemical Reaction Networks. Michael Woulfe. 24 July 2024.
<p><br></p><p dir="ltr">Algorithmic reaction explorations based on transition state searches can now routinely predict relatively short reaction sequences involving small molecules across a variety of chemical domains, including materials degradation, combustion chemistry, battery performance, and biomass conversion. Mature quantum chemistry tools can comprehensively characterize the reactivity of species with efficiency and broad coverage, but consecutive characterizations quickly encounter prohibitive costs of reactant proliferation, spurious characterization of irrelevant intermediates, and compounding uncertainties of quantum chemical calculations deep in a network. Application of these algorithms to deeper chemical reaction network (CRN) exploration still requires the development of more effective, comprehensive, and automated exploration policies. </p><p><br></p><p dir="ltr">This dissertation addresses the challenge of exploring deep chemical reaction networks (CRNs) in complex and chemically diverse systems by introducing Yet Another Kinetic Strategy (YAKS), an automated algorithm designed to minimize the computational costs of deep exploration and maximize coverage of important reaction channels. YAKS demonstrates that microkinetic simulations of the nascent network are cost-effective and able to iteratively build deep networks. Key features of the algorithm are the automatic incorporation of expanded elementary reaction steps, compatibility with short-lived but kinetically important species, and the incorporation of rate uncertainty into the exploration policy. The automatically induced expansion of reaction mechanisms gives YAKS access to important chemistries that other algorithms ignore, while also maintaining the ability to limit expensive forays into kinetically irrelevant regions of the CRN that would stymie previous methods. 
Instead of conducting a greedy exploration, YAKS biases network topography to probe beyond short-lived but kinetically important species, which enables YAKS to explore important endergonic reactions deep into the CRN. YAKS further introduces rate uncertainty into an ensemble of microkinetic simulations, which positively influences intermediate prioritization deep in a network. </p><p><br></p><p dir="ltr">Algorithm effectiveness was validated in a case study of glucose pyrolysis, where the algorithm rediscovers reaction pathways previously discovered by heuristic exploration policies and also elucidates new reaction pathways to experimentally obtained products. The resulting CRN is the first to connect all major experimental pyrolysis products to glucose. Additional case studies are presented that investigate the role of reaction rules, rate uncertainty, and bimolecular reactions. These case studies show that naïve exponential growth estimates can vastly overestimate the actual number of kinetically relevant pathways in physical reaction networks. The excellent performance of YAKS demonstrates the ability of automated algorithmic methods to address the gaps outlined above.</p><p><br></p><p dir="ltr">The power of YAKS was then demonstrated on chemistry radically distinct from the validation study: chemical warfare agents (CWAs). Despite the almost uniform ban on the use of chemical agents and the widespread neutralization of stockpiles due to treaties, CWAs continue to pose a grave threat around the world. Rogue states, terrorist organizations, and lone wolf terrorists have all conducted CWA attacks within the past few decades. These circumstances make it necessary to prepare against and forensically evaluate the use of CWAs without direct experimentation. YAKS was applied to elucidate degradation reaction networks of three prominent CWAs, mustard gas (SM, HD), sarin (GB), and VX, and identified a range of possible degradant products of real-world use cases. 
This dissertation also computationally interpreted the most common mechanism of action (MoA) associated with each CWA and examined their hydrolysis networks as a method to neutralize these agents. Additionally, agent stability was evaluated during extended microkinetic modeling in arid and humid scenarios, highlighting the potential for computational simulation approaches to fill a capability gap in the broader field of chemical defense. </p><p><br></p><p dir="ltr">This dissertation advanced automated CRN exploration, but considerable gaps remain. Future research directions include closing the accuracy gaps of both density functional theory and conformational sampling in energy calculations. Incorporation of machine learning (ML) methods can accelerate the costly reactivity characterization process, but ML models still require vast amounts of data. A recently released dataset comprehensively explored over 175,000 graphically defined reactions of moderately sized C-, H-, O-, and N-containing molecules. While models trained on such data could readily be applied to glucose pyrolysis systems, chemical agents involve a much wider array of chemistry including Cl, S, and P, and considerable quantities of radical and charged species. More comprehensive datasets are required to train a general ML model capable of accelerating geometry or energy calculations. Additionally, microkinetic modeling is hindered by software implementations that are unable to explore diverse chemistry such as multiphase reactions. In light of this, further improvements in exploration policies, reaction prediction algorithms, and simulation software make it feasible that CRNs might soon be routinely predictable in many additional contexts.</p>
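The microkinetic simulations at the heart of this kind of exploration can be pictured with a deliberately tiny stand-in. The sketch below integrates the toy first-order network A → B → C with explicit Euler steps and tracks the peak concentration of the short-lived intermediate B; the point is that flux through such an intermediate, not its instantaneous concentration, is what flags it as kinetically important. All species, rate constants, and step sizes here are invented for illustration and are not taken from the dissertation.

```python
def simulate_abc(k1, k2, dt=1e-3, steps=20000):
    """Explicit-Euler integration of the toy first-order network A -> B -> C.

    k1, k2 : rate constants for A -> B and B -> C (assumed values).
    Returns the final concentrations (a, b, c) and the peak concentration
    reached by the short-lived intermediate B along the way.
    Illustrative only; a real microkinetic engine couples far larger
    networks and samples rate constants within their uncertainty bounds.
    """
    a, b, c = 1.0, 0.0, 0.0  # start with pure A, normalized to 1
    peak_b = 0.0
    for _ in range(steps):
        da = -k1 * a
        db = k1 * a - k2 * b
        dc = k2 * b
        a += dt * da
        b += dt * db
        c += dt * dc
        peak_b = max(peak_b, b)  # record how much flux passes through B
    return (a, b, c), peak_b
```

With k2 much larger than k1, B never accumulates, yet essentially all of the product C forms through it. Running an ensemble of such simulations with rate constants perturbed within their uncertainty bounds is, in spirit, how rate uncertainty can be folded into intermediate prioritization.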
149.
Writing with Video Games. Stinson, Samuel D. 1 October 2018.
No description available.
150.
A hypermedia and project-based approach to music, sound and media art. Koutsomichalis, Marinos G. January 2015.
This thesis describes my artistic practice as essentially project-based, site-responsive and hypermediating. Hypermediacy—i.e. the tendency of certain media or objects to keep their various constituents separate from their structure—is to be understood as opaque, juxtaposed and after a recurring contiguity with different kinds of interfaces. Accordingly, and within the context of the various projects that constitute this thesis, it is demonstrated how, in response to the particular places I work and to the various people I collaborate with, different kinds of materials and methodologies are incorporated in broader hybrids that are mediated (interfaced) in miscellaneous ways so as to result in original works of art. Materials and methodologies are shown to be intertwined and interdependent with each other as well as with the different ways in which they are interfaced, which accounts for an explicitly project-based, rather than artwork-based, approach which, in turn, de-emphasises the finished artefact in favour of process, performance, research and exploration. Projects are, then, shown to be explicitly site- or situation-responsive, as they are not implementations of pre-existent ideas, but rather emerge as my original response to the particular sites, materials, people and the various other constituents that are involved in their very production. Interfaces to such hybrids, as well as their very material and methodological elements, are also shown to be hypermediated. It is finally argued that such an approach essentially accelerates multi-perspectivalism, in that a project may spawn a number of diverse, typically medium-specific and/or site-specific, artworks that all exemplify different qualities which are congenital to the particular nature of each project.