  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
181

Video Processing using multiplierless 2D-DCT with Algebraic Integers and MR-DCT

Nimmalapalli, Sushmabhargavi January 2018 (has links)
No description available.
182

The Elliptic Curve Method : A Modern Approach to Integer Factorization

Cao, Felix January 2023 (has links)
In this paper, we present a study of elliptic curves, focusing on their underlying mathematical concepts, properties, and applications in number theory. We begin by introducing elliptic curves and their unique features, discussing their algebraic structure, and exploring their group law, providing examples and geometric interpretations. The core of our study focuses on the Elliptic Curve Method (ECM) for integer factorization. We present the motivation behind ECM and compare it to Pollard’s (p-1) method. A discussion on pseudocurves and the choice of an elliptic curve and bound B is provided. We also address the differences between ECM and Pollard’s (p-1) method and propose optimization techniques for ECM, including the calculation of the least common multiple (LCM) of the first B integers using the Sieve of Eratosthenes.
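The LCM optimization this abstract mentions can be sketched in a few lines: lcm(1, ..., B) is the product, over primes p ≤ B, of the largest power of p not exceeding B, with the primes supplied by the Sieve of Eratosthenes. The function names below are ours for illustration, not the paper's.

```python
from math import isqrt

def sieve_primes(bound):
    """Sieve of Eratosthenes: return all primes <= bound."""
    is_prime = [False, False] + [True] * (bound - 1)
    for p in range(2, isqrt(bound) + 1):
        if is_prime[p]:
            for multiple in range(p * p, bound + 1, p):
                is_prime[multiple] = False
    return [p for p, flag in enumerate(is_prime) if flag]

def lcm_up_to(bound):
    """lcm(1, ..., bound): multiply, for each prime p <= bound,
    the largest power of p that does not exceed bound."""
    result = 1
    for p in sieve_primes(bound):
        power = p
        while power * p <= bound:
            power *= p
        result *= power
    return result
```

For example, lcm(1, ..., 10) = 2^3 · 3^2 · 5 · 7 = 2520, since 8 and 9 are the largest powers of 2 and 3 not exceeding 10.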
183

Complexity evaluation of CNNs in tightly coupled hybrid recommender systems / Komplexitetsanalys av faltningsnätverk i tätt kopplade hybridrekommendationssystem

Ingverud, Patrik January 2018 (has links)
In this report we evaluated how the complexity of a Convolutional Neural Network (CNN), in terms of the number of filters, the size of the filters and dropout, affects the rating prediction accuracy in a tightly coupled hybrid recommender system. We also evaluated the effect on the rating prediction accuracy of pretrained CNNs in comparison to non-pretrained CNNs. We found that a less complex model, i.e., one with smaller and fewer filters, showed trends of better performance. Less regularization, in terms of dropout, also showed trends of better performance for the less complex models. Regarding the comparison of pretrained and non-pretrained models, the experimental results were almost identical for the two denser datasets, while pretraining performed slightly worse on the sparsest dataset. / I denna rapport utvärderade vi komplexiteten på ett neuralt faltningsnätverk (eng. Convolutional Neural Network) i form av antal filter, storleken på filtren och regularisering, i form av avhopp (eng. dropout), för att se hur dessa hyperparametrar påverkade träffsäkerheten för rekommendationer i ett hybridrekommendationssystem. Vi utvärderade även hur förträning av det neurala faltningsnätverket påverkade träffsäkerheten för rekommendationer i jämförelse med ett icke förtränat neuralt faltningsnätverk. Resultaten visade trender på att en mindre komplex modell, det vill säga mindre och färre filter, gav bättre resultat. Även mindre regularisering, i form av avhopp, gav bättre resultat för mindre komplexa modeller. Gällande jämförelsen med förtränade modeller och icke förtränade modeller visade de experimentella resultaten nästan ingen skillnad för de två kompaktare dataseten medan förträning gav lite sämre resultat på det glesaste datasetet.
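The complexity knobs this report varies (number of filters, filter size) translate directly into per-layer parameter counts. A back-of-the-envelope helper, ours for illustration only and not tied to the report's actual architectures:

```python
def conv2d_params(in_channels, num_filters, filter_size):
    """Weights plus biases of a single 2D convolutional layer
    with square filters of side filter_size."""
    return num_filters * (filter_size * filter_size * in_channels) + num_filters

# Shrinking from 64 filters of size 5x5 to 16 filters of size 3x3
# (on a single-channel input) cuts the layer by an order of magnitude:
big = conv2d_params(in_channels=1, num_filters=64, filter_size=5)    # 1664 parameters
small = conv2d_params(in_channels=1, num_filters=16, filter_size=3)  # 160 parameters
```

This is one way to see why "smaller and fewer filters" means a markedly less complex model.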
184

Trial Division : Improvements and Implementations / Trial Division : Förbättringar och Implementationer

Hedenström, Felix January 2017 (has links)
Trial division is possibly the simplest algorithm for factoring numbers. The problem with trial division is that it is slow and wastes computational time on unnecessary tests of division. How can this simple algorithm be sped up while still being serial? How does this algorithm behave when parallelized? Can a superior serial and a parallel version be combined into an even more powerful algorithm? To answer these questions, the basics of trial division were researched and improvements were suggested. These improvements were later implemented and tested by measuring the time it took to factorize a given number. A version using a list of primes and multiple threads turned out to be the fastest for numbers larger than 10^10, but was beaten when factoring lower numbers by its serial counterpart. A problem was detected that caused the parallel versions to have long allocation times which slowed them down, but this did not hinder them much. / Trial division är en av de enklaste algoritmerna när det kommer till att faktorisera tal. Problemet med trial division är att det är relativt långsamt och att det gör onödiga beräkningar. Hur kan man göra denna algoritm snabbare samtidigt som den förblir seriell? Hur beter sig algoritmen när den är parallelliserad? Kan en förbättrad seriell version sedan bli parallelliserad? För att besvara dessa frågor studerades trial division och dess möjliga förbättringar. Dessa förbättringar implementerades i form av flera funktioner som sedan testades mot varandra. Den snabbaste versionen byggde på att använda en lista utav primtal och trådar för att minimera antalet ’trials’ samt att dela upp arbetet. Den var dock inte alltid snabbast, då den seriella versionen som också använde en lista av primtal var snabbare för siffror under 10^10. Sent upptäcktes ett re-allokeringsproblem med de parallella implementationerna, men eftersom de ändå var snabbare åtgärdades inte detta problem.
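One classic serial improvement of the kind this report studies, skipping division tests that cannot succeed, can be sketched with a 6k ± 1 wheel. This is our illustration of the general technique, not the report's exact implementation:

```python
def trial_division(n):
    """Factor n by trial division with a 6k +/- 1 wheel: after
    dividing out 2 and 3, every remaining prime is congruent to
    +/- 1 mod 6, so roughly two-thirds of the candidate divisors
    a naive loop would test are skipped."""
    factors = []
    for p in (2, 3):
        while n % p == 0:
            factors.append(p)
            n //= p
    d = 5
    while d * d <= n:
        for p in (d, d + 2):  # the pair 6k - 1, 6k + 1
            while n % p == 0:
                factors.append(p)
                n //= p
        d += 6
    if n > 1:
        factors.append(n)  # remaining cofactor is prime
    return factors
```

The serial version the report found fastest below 10^10 goes further, testing only actual primes from a precomputed list instead of wheel candidates.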
185

Machine Learning Approaches to Historic Music Restoration

Coleman, Quinn 01 March 2021 (has links) (PDF)
In 1889, a representative of Thomas Edison recorded Johannes Brahms playing a piano arrangement of his piece titled “Hungarian Dance No. 1”. This recording acts as a window into how musical masters played in the 19th century. Yet, due to years of damage to the original recording medium, a wax cylinder, it was unlistenable by the time it was digitized into WAV format. This thesis presents machine learning approaches to an audio restoration system for historic music, which aims to convert this poor-quality Brahms piano recording into a higher-quality one. Digital signal processing is paired with two machine learning approaches: non-negative matrix factorization and deep neural networks. Our results show the advantages and disadvantages of our approaches when we compare them to a benchmark restoration of the same recording made by the Center for Computer Research in Music and Acoustics at Stanford University. They also show how this system provides restoration potential for a wide range of historic music artifacts like this recording, requiring minimal overhead made possible by machine learning. Finally, we discuss possible future improvements to these approaches.
186

Mining Structural and Functional Patterns in Pathogenic and Benign Genetic Variants through Non-negative Matrix Factorization

Peña-Guerra, Karla A 08 1900 (has links)
The main challenge in studying genetics has evolved from identifying variations and their impact on traits to comprehending the molecular mechanisms through which genetic variations affect human biology, including disease susceptibility. Although large-scale genome-wide association studies (GWAS) have identified a vast number of variants associated with human traits, a significant portion of them still lacks detailed insights into the underlying mechanisms [1]. Addressing this uncertainty requires the development of precise and scalable approaches to discover precisely how genetic variation influences phenotypes at a molecular level. In this study, we developed a pipeline to automate the annotation of structural variant feature effects. We applied this pipeline to a dataset of 33,942 variants from the ClinVar and gnomAD databases, which included both pathogenic and benign associations. To bridge the gap between genetic variation data and molecular phenotypes, we implemented Non-negative Matrix Factorization (NMF) on this large-scale dataset. This algorithm revealed six distinct clusters of variants with similar feature profiles. Among these groups, two exhibited a predominant presence of benign variants (accounting for 70% and 85% of the clusters), while one showed an almost equal distribution of pathogenic and benign variants. The remaining three groups were predominantly composed of pathogenic variants, comprising 68%, 83%, and 77% of the respective clusters. These findings revealed valuable insights into the underlying mechanisms contributing to pathogenicity. Further analysis of this dataset and the exploration of disease-related genes can enhance the accuracy of genetic diagnosis and therapeutic development through the direct inference of variants that are likely to affect the functioning of essential genes.
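The clustering step can be sketched in a minimal form, assuming a plain Lee–Seung multiplicative-update NMF and hard assignment of each variant (row) to its strongest factor; the thesis's actual feature matrix, rank, and preprocessing are not reproduced here.

```python
import numpy as np

def nmf(V, rank, iters=500, seed=0, eps=1e-9):
    """Non-negative Matrix Factorization, V ~= W @ H, fitted with
    Lee-Seung multiplicative updates on the Frobenius objective."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

def cluster_rows(W):
    """Hard clustering: assign each row (variant) to the factor
    with its largest loading."""
    return W.argmax(axis=1)

# Toy feature matrix: rows 0-2 share one feature profile, rows 3-5 another.
V = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1],
              [0, 0, 1, 1]], dtype=float)
W, H = nmf(V, rank=2)
labels = cluster_rows(W)  # rows with the same profile share a label
```

On real variant-by-feature data the number of factors (six in the study) would be chosen by model selection rather than fixed in advance.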
187

Evaluating, Understanding, and Mitigating Unfairness in Recommender Systems

Yao, Sirui 10 June 2021 (has links)
Recommender systems are information filtering tools that discover potential matchings between users and items and benefit both parties. This benefit can be considered a social resource that should be equitably allocated across users and items, especially in critical domains such as education and employment. Biases and unfairness in recommendations raise both ethical and legal concerns. In this dissertation, we investigate the concept of unfairness in the context of recommender systems. In particular, we study appropriate unfairness evaluation metrics, examine the relation between bias in recommender models and inequality in the underlying population, and propose effective unfairness mitigation approaches. We start by exploring the implications of fairness in recommendation and formulating unfairness evaluation metrics. We focus on the task of rating prediction. We identify the insufficiency of demographic parity for scenarios where the target variable is justifiably dependent on demographic features. Then we propose an alternative set of unfairness metrics measured by how much the average predicted ratings deviate from the average true ratings. We also reduce these forms of unfairness in matrix factorization (MF) models by explicitly adding them as penalty terms to the learning objectives. Next, we target a form of unfairness in matrix factorization models observed as disparate model performance across user groups. We identify four types of biases in the training data that contribute to higher subpopulation error. Then we propose personalized regularization learning (PRL), which learns personalized regularization parameters that directly address the data biases. PRL poses the hyperparameter search problem as a secondary learning task. It enables back-propagation to learn the personalized regularization parameters by leveraging the closed-form solutions of alternating least squares (ALS) for solving MF.
Furthermore, the learned parameters are interpretable and provide insights into how fairness is improved. Third, we conduct a theoretical analysis of the long-term dynamics of inequality in the underlying population, in terms of the fit between users and items. We view the task of recommendation as solving a set of classification problems through threshold policies. We mathematically formulate the transition dynamics of user-item fit in one step of recommendation. Then we prove that a system with the formulated dynamics always has at least one equilibrium, and we provide sufficient conditions for the equilibrium to be unique. We also show that, depending on the item category relationships and the recommendation policies, recommendations in one item category can reshape the user-item fit in another item category. To summarize, in this research, we examine different fairness criteria in rating prediction and recommendation, study the dynamics of interactions between recommender systems and users, and propose mitigation methods to promote fairness and equality. / Doctor of Philosophy / Recommender systems are information filtering tools that discover potential matchings between users and items. However, a recommender system, if not properly built, may not treat users and items equitably, which raises ethical and legal concerns. In this research, we explore the implications of fairness in the context of recommender systems, study the relation between unfairness in recommender output and inequality in the underlying population, and propose effective unfairness mitigation approaches. We start by finding unfairness metrics appropriate for recommender systems. We focus on the task of rating prediction, which is a crucial step in recommender systems. We propose a set of unfairness metrics measured as the disparity in how much predictions deviate from the ground-truth ratings.
We also offer a mitigation method to reduce these forms of unfairness in matrix factorization models. Next, we look deeper into the factors that contribute to error-based unfairness in matrix factorization models and identify four types of biases that contribute to higher subpopulation error. Then we propose personalized regularization learning (PRL), a mitigation strategy that learns personalized regularization parameters to directly address data biases. The learned per-user regularization parameters are interpretable and provide insight into how fairness is improved. Third, we conduct a theoretical study on the long-term dynamics of the inequality in the fit (e.g., interest, qualification, etc.) between users and items. We first mathematically formulate the transition dynamics of user-item fit in one step of recommendation. Then we discuss the existence and uniqueness of system equilibrium as the one-step dynamics repeat. We also show that, depending on the relation between item categories and the recommendation policies (unconstrained or fair), recommendations in one item category can reshape the user-item fit in another item category. In summary, we examine different fairness criteria in rating prediction and recommendation, study the dynamics of interactions between recommender systems and users, and propose mitigation methods to promote fairness and equality.
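The deviation-based metric family this abstract describes can be sketched as follows. This simplified version, a single disparity number over two user groups, is our illustration and not the dissertation's exact definition:

```python
import numpy as np

def deviation_unfairness(pred, true, group_a):
    """Disparity between two user groups in how far the average
    predicted rating deviates from the average true rating, computed
    per item and then averaged over items.

    pred, true: (n_users, n_items) rating matrices.
    group_a:    boolean mask over users; ~group_a is the other group.
    Returns 0 when both groups' predictions deviate identically."""
    dev_a = (pred[group_a] - true[group_a]).mean(axis=0)
    dev_b = (pred[~group_a] - true[~group_a]).mean(axis=0)
    return float(np.abs(dev_a - dev_b).mean())
```

A metric of this shape can be added directly as a penalty term to an MF learning objective, which is the mitigation route the abstract describes.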
188

Sustainable Recipe Recommendation System: Evaluating the Performance of GPT Embeddings versus state-of-the-art systems

Bandaru, Jaya Shankar, Appili, Sai Keerthi January 2023 (has links)
Background: The demand for a sustainable lifestyle is increasing due to the need to tackle rapid climate change. One-third of carbon emissions come from the food industry; reducing emissions from this industry is crucial when fighting climate change. One of the ways to reduce carbon emissions from this industry is by helping consumers adopt sustainable eating habits through eco-friendly food. To help consumers find eco-friendly recipes, we developed a sustainable recipe recommendation system that can recommend relevant and eco-friendly recipes to consumers using little information about their previous food consumption.  Objective: The main objective of this research is to identify (i) a recommendation algorithm suitable for a dataset that has few training and testing examples, and (ii) a technique to re-order the recommendation list such that a proper balance is maintained between relevance and the carbon rating of the recipes. Method: We conducted an experiment to test the performance of a GPT-embedding-based recommendation system, Factorization Machines, and a version of a Graph Neural Network-based recommendation algorithm called PinSage for different numbers of training examples, using the ROC AUC value as our metric. After finding the best-performing model, we experimented with different re-ordering techniques to find which technique provides the right balance between relevance and sustainability. Results: The results from the experiment show that PinSage and Factorization Machines predict on average whether an item is relevant or not with 75% probability, whereas GPT-embedding-based recommendation systems predict with only 55% probability. We also found that the performance of PinSage and Factorization Machines improved as the training set size increased.
For re-ordering, we found that using a logarithmic combination of the relevance score and the carbon rating of the recipe helped to reduce the average carbon rating of recommendations with only a marginal reduction in the ROC AUC score.  Conclusion: The results show that the chosen state-of-the-art recommendation systems, PinSage and Factorization Machines, outperform GPT-embedding-based recommendation systems by almost 1.4 times.
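A re-ordering step of this kind can be sketched as follows. The abstract states only that a logarithmic combination of relevance and carbon rating was used, so the exact formula below (relevance minus a log-scaled carbon penalty, lower carbon assumed better) is our assumption for illustration:

```python
import math

def rerank(recipes, alpha=1.0):
    """Re-order recommendations by a logarithmic combination of
    relevance score and carbon rating. alpha controls how strongly
    high-carbon recipes are pushed down; the formula is illustrative,
    not the thesis's exact one."""
    def score(recipe):
        # log1p keeps the penalty growing slowly, so carbon rating
        # breaks ties without overwhelming relevance.
        return recipe["relevance"] - alpha * math.log1p(recipe["carbon"])
    return sorted(recipes, key=score, reverse=True)

ranked = rerank([
    {"name": "beef stew",   "relevance": 0.90, "carbon": 5.0},
    {"name": "lentil soup", "relevance": 0.85, "carbon": 1.0},
])
# The slightly less relevant but much lower-carbon recipe ranks first.
```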
189

Hardware Implementation of Error Control Decoders

Chen, Bainan 02 June 2008 (has links)
No description available.
190

Efficient VLSI Architectures for Algebraic Soft-decision Decoding of Reed-Solomon Codes

Zhu, Jiangli 26 May 2011 (has links)
No description available.
