71

Functional data mining with multiscale statistical procedures

Lee, Kichun 01 July 2010
Hurst exponent and variance are two quantities that often characterize real-life, high-frequency observations. We develop a method for the simultaneous estimation of a time-changing Hurst exponent H(t) and a constant scale (variance) parameter C in a multifractional Brownian motion model in the presence of white noise, based on the asymptotic behavior of the local variation of its sample paths. We also discuss the accuracy of this stable, simultaneous estimator compared with a few selected methods, and the stability of computations that use adapted wavelet filters. Multifractals have become popular as flexible models for real-life, high-frequency data. We develop a method for testing whether high-frequency data are consistent with monofractality, using meaningful descriptors derived from a wavelet-generated multifractal spectrum. We discuss the theoretical properties of the descriptors, their computational implementation, their use in data mining, and their effectiveness in the context of simulations, an application in turbulence, and the analysis of coding/noncoding regions in DNA sequences. Wavelet thresholding is a simple and effective operation in wavelet domains that selects a subset of wavelet coefficients from a noisy signal. We propose selecting this subset in a semi-supervised fashion, utilizing a neighbor structure and a classification function appropriate for wavelet domains. The decision to include an unlabeled coefficient in the model depends not only on its magnitude but also on the labeled and unlabeled coefficients in its neighborhood. The theoretical properties of the method are discussed and its performance is demonstrated on simulated examples.
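For illustration, a minimal Python sketch of standard wavelet soft-thresholding (VisuShrink) using the PyWavelets library; the semi-supervised, neighborhood-based coefficient selection proposed in the thesis would replace the simple magnitude rule used here:

import numpy as np
import pywt

def denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Estimate the noise scale from the finest detail coefficients (MAD rule).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))  # universal threshold
    # Keep the approximation coefficients, shrink all detail coefficients.
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)

noisy = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.3 * np.random.randn(1024)
clean = denoise(noisy)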
72

Enhanced classification approach with semi-supervised learning for reliability-based system design

Patel, Jiten 02 July 2012
Traditionally, design engineers have used the Factor of Safety method to ensure that designs do not fail in the field. Access to advanced computational tools and resources has made this process obsolete, and new methods for introducing higher levels of reliability into engineering systems are currently being investigated. However, even with ample computing power available, the computational cost of reliability analysis procedures leaves much to be desired. Furthermore, regression-based surrogate modeling techniques fail when there is discontinuity in the design space, caused by failure mechanisms, when the design is required to perform under severe externalities. Hence, in this research we propose efficient semi-supervised-learning-based surrogate modeling techniques that enable accurate estimation of a system's response, even under discontinuity. These methods combine the available labeled and unlabeled datasets and provide better models than labeled data alone. Labeled data are expensive to obtain, since the responses have to be evaluated, whereas unlabeled data are plentiful during reliability estimation, since the PDF information of the uncertain variables is assumed to be known. This superior performance is gained by combining the efficiency of Probabilistic Neural Networks (PNN) for classification with the Expectation-Maximization (EM) algorithm for treating the unlabeled data as labeled data with hidden labels.
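As a rough analogue of this approach, the sketch below folds unlabeled samples into a surrogate classifier via self-training in scikit-learn; the authors' actual PNN-plus-EM formulation is not reproduced here, and the kernel SVM and toy limit state are stand-ins:

import numpy as np
from sklearn.svm import SVC
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(40, 2))                 # expensive evaluated designs
y_labeled = (X_labeled.sum(axis=1) > 0).astype(int)  # toy fail/safe labels
X_unlabeled = rng.normal(size=(400, 2))              # cheap samples from known PDFs

X = np.vstack([X_labeled, X_unlabeled])
y = np.concatenate([y_labeled, -np.ones(len(X_unlabeled), dtype=int)])  # -1 = unlabeled

model = SelfTrainingClassifier(SVC(probability=True), threshold=0.9)
model.fit(X, y)  # surrogate for the limit-state classification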
73

Be More with Less: Scaling Deep-learning with Minimal Supervision

Yaqing Wang (12470301) 28 April 2022
Large-scale deep learning models have reached previously unattainable performance on various tasks. However, the ever-growing resource consumption of neural networks generates a large carbon footprint, makes it difficult for academics to engage in research, and prevents emerging economies from enjoying the growing benefits of Artificial Intelligence (AI). To scale AI further and bring more benefits, two major challenges need to be solved. First, even though large-scale deep learning models have achieved remarkable success, their performance is still unsatisfactory when fine-tuning with only a handful of examples, hindering widespread adoption in real-world applications where large amounts of labeled data are difficult to obtain. Second, current machine learning models are still mainly designed for tasks in closed environments where test datasets are highly similar to training datasets. When the deployed datasets exhibit distribution shift relative to the collected training data, we generally observe degraded performance from the developed models. How to build adaptable models becomes another critical challenge. To address these challenges, this dissertation focuses on two topics: few-shot learning and domain adaptation, where few-shot learning aims to learn tasks with limited labeled data and domain adaptation addresses the discrepancy between training data and test data. In Part 1, we present our few-shot learning studies. The proposed few-shot solutions are built upon large-scale language models, with evolutionary explorations ranging from improving supervision signals and incorporating unlabeled data to improving few-shot learning abilities with lightweight fine-tuning designs that reduce deployment costs. In Part 2, domain adaptation studies are introduced. We develop a progressive series of domain adaptation approaches to transfer knowledge across domains efficiently and handle distribution shifts, including capturing common patterns across domains, adaptation with weak supervision, and adaptation to thousands of domains with limited labeled and unlabeled data.
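A minimal PyTorch sketch of the lightweight fine-tuning idea: freeze a large pretrained encoder and train only a small task head, so few-shot adaptation updates a tiny fraction of the parameters. The encoder and head here are generic placeholders, not the dissertation's architectures:

import torch
import torch.nn as nn

encoder = nn.TransformerEncoder(  # stand-in for a pretrained language model
    nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True),
    num_layers=4,
)
for p in encoder.parameters():
    p.requires_grad = False       # frozen: no gradients, no updates

head = nn.Linear(256, 2)          # the only trainable parameters
opt = torch.optim.AdamW(head.parameters(), lr=1e-3)

x = torch.randn(8, 16, 256)       # a few-shot batch of 8 token sequences
y = torch.randint(0, 2, (8,))
logits = head(encoder(x).mean(dim=1))  # mean-pool tokens, then classify
loss = nn.functional.cross_entropy(logits, y)
loss.backward()
opt.step()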
74

Analyzing the performance of active learning strategies on machine learning problems

Werner, Vendela January 2023
Digitalisation within industries is advancing rapidly and data possibilities are growing daily. Machine learning models need a large amount of well-annotated data to perform well. Obtaining well-annotated data requires an expert, which is expensive, and the annotation itself can be very time-consuming. The performance of machine learning models depends on the size of the data set, since a large amount of annotated data is required for good performance. Active learning has emerged as a solution for growing the labelled data set through selective annotation. Instead of labelling data points at random, active learning strategies select data points based on informativeness or uncertainty. The challenge lies in determining the most effective active learning strategy for a given combination of machine learning model and problem type. Although active learning has been around for a while, benchmarking of strategies has not been widely explored. The aim of this thesis was to benchmark different active learning (AL) strategies and analyse their performance on underlying machine learning (ML) problems and ML methods/models. For this purpose, an experiment was constructed to compare, in an unbiased way, different machine learning models in combination with different active learning strategies within the areas of computer vision, drug discovery, and natural language processing. Nine different active learning strategies were analysed in the thesis, with a random strategy serving as the baseline, tested on six different machine learning methods/models. The result was that active learning had a positive effect within all problem areas and worked especially well for unbalanced data. The two main conclusions are that all active learning strategies work better with a smaller budget, due to the importance of selecting informative data points, and that prediction-based strategies are the most successful for all problem types. / Imagine having a tool to cure a genetic disease. Today, data are everywhere; even your DNA is considered full of valuable information and mysteries ready to be explored. Within our data there are endless connections and hidden relationships that not even the best human mind can find, and computing power has become a force to be reckoned with. A winning concept has proven to be human-in-the-loop programming, where human and computer work together. In machine learning, this is called supervised learning. Normally, supervised learning requires a large amount of data and, for more complex tasks, an expert, since feedback from a human is expected. You can think of the computer as a detective and the expert as its boss, pointing it in the right direction. The direction is indicated by annotating data: you tell the computer which answer is correct so that it learns to pick out distinguishing features. For example, if you want a program that tells dogs from cats, it can be hard to know which is which if you have never seen an animal before. Both have two ears, two eyes, four legs and, in many cases, fur. A human can then tell the computer whether the picture shows a dog or a cat, and the computer will begin to learn to see patterns and distinctive characteristics. Annotating data takes a long time and costs a lot of money. So what do you do when the amount of data is too small and/or the cost of an expert becomes too high? Sam is a person with a rare genetic disease. They have heard of a program, based on supervised learning, that can suggest which medical treatment they could try to relieve their symptoms. Because Sam's genetic disease is so unusual, there is not much data about it, which means the software will not work in Sam's case. Remember that supervised learning needs a large amount of well-annotated data to give reliable output. How can the programmer help Sam? With active learning, of course! Active learning is an umbrella term for strategies that select the most informative, or most uncertain, data points to annotate. Instead of making, say, 2000 annotations, better performance can be achieved with only 100. The difference is that in supervised learning without active learning, a fixed set of points is presented for the expert to annotate, whereas with active learning there is an interaction to pick out the points to annotate. This results in more cost-effective learning that also performs well on a small data set. This thesis has studied the performance of active learning in the pharmaceutical industry, as well as on problems in computer vision and natural language processing. The result was that at least one of the applied active learning strategies led to improved performance in every area. Perhaps, in the future, we can actually use active learning to help people like Sam, and have the tool to solve the mystery and cure their genetic disease.
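For concreteness, a minimal sketch of uncertainty sampling, the simplest prediction-based strategy of the kind benchmarked in the thesis (the model and data here are illustrative):

import numpy as np
from sklearn.linear_model import LogisticRegression

def least_confident(model, X_pool, batch_size=10):
    proba = model.predict_proba(X_pool)
    uncertainty = 1.0 - proba.max(axis=1)          # low top-class probability = unsure
    return np.argsort(uncertainty)[-batch_size:]   # pool indices to send to the expert

rng = np.random.default_rng(1)
X_pool = rng.normal(size=(1000, 5))                # unlabeled pool
X_seed = rng.normal(size=(20, 5))                  # small labeled seed set
y_seed = (X_seed[:, 0] > 0).astype(int)

model = LogisticRegression().fit(X_seed, y_seed)
query_idx = least_confident(model, X_pool)         # next points to annotate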
75

Quality monitoring of projection welding using machine learning with small data sets

Koal, Johannes, Hertzschuch, Tim, Zschetzsche, Jörg, Füssel, Uwe 19 January 2024
Capacitor discharge welding is an efficient, cost-effective and stable process. It is mostly used for projection welding. Real-time monitoring is desired to ensure quality. Until now, measured process quantities have been evaluated through expert systems. This approach takes a long time to develop, is strongly restricted to specific welding tasks and requires a deep understanding of the process. Another possibility is quality prediction from process data with machine learning, which can overcome the downsides of expert systems. However, it requires classified welding experiments to achieve a high prediction probability, and in industrial manufacturing it is rarely possible to generate large data sets of this kind. Therefore, semi-supervised learning is investigated to enable model development on small data sets. Supervised learning is used to develop machine learning models on large amounts of data; these models serve as a comparison for the semi-supervised models. The time signals of the process parameters are evaluated in these investigations. A total of 389 classified weld tests were performed. With semi-supervised learning methods, the amount of training data necessary was reduced to 31 classified data sets.
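One semi-supervised option for such small labeled sets is label spreading, sketched below with scikit-learn; the feature extraction from the welding time signals is illustrative, as the paper does not specify this pipeline:

import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(2)
X = rng.normal(size=(389, 6))            # e.g., summary features per weld test
y = -np.ones(389, dtype=int)             # -1 marks unlabeled welds
labeled_idx = rng.choice(389, size=31, replace=False)
y[labeled_idx] = rng.integers(0, 2, size=31)  # 31 expert-classified welds

model = LabelSpreading(kernel="rbf", gamma=0.5)
model.fit(X, y)                          # propagates the 31 labels to the rest
pred_quality = model.transduction_       # inferred quality class for every weld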
76

From Pixels to Prices with ViTMAE : Integrating Real Estate Images through Masked Autoencoder Vision Transformers (ViTMAE) with Conventional Real Estate Data for Enhanced Automated Valuation / Från pixlar till priser med ViTMAE : Integrering av bostadsbilder genom Masked Autoencoder Vision Transformers (ViTMAE) med konventionell fastighetsdata för förbättrad automatiserad värdering

Ekblad Voltaire, Fanny January 2024
The integration of Vision Transformers (ViTs) with Masked Autoencoder pre-training (ViTMAE) into real estate valuation is investigated in this Master's thesis, addressing the challenge of effectively analyzing visual information from real estate images. The integration aims to enhance the accuracy and efficiency of valuation, a task traditionally dependent on realtor expertise. The research involved developing a model that combines ViTMAE-extracted visual features from real estate images with traditional property data. Focusing on residential properties in Sweden, the study utilized a dataset of images and metadata from online real estate listings. An adapted ViTMAE model, accessed via the Hugging Face library, was trained on the dataset for feature extraction and then integrated with the metadata to create a comprehensive multimodal valuation model. Results indicate that including ViTMAE-extracted image features improves the prediction accuracy of real estate valuation models: the multimodal approach, merging visual features and traditional metadata, improved accuracy over metadata-only models. The thesis contributes to real estate valuation by showcasing the potential of advanced image processing techniques to enhance valuation models, and lays the groundwork for future research on more refined holistic valuation models incorporating a wider range of factors beyond visual data.
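A sketch of the fusion idea, assuming the Hugging Face transformers library; the checkpoint name and the mask_ratio = 0 setting (to disable MAE's random patch masking during feature extraction) are assumptions, not details taken from the thesis:

import numpy as np
import torch
from PIL import Image
from transformers import AutoImageProcessor, ViTMAEModel

processor = AutoImageProcessor.from_pretrained("facebook/vit-mae-base")
encoder = ViTMAEModel.from_pretrained("facebook/vit-mae-base")
encoder.config.mask_ratio = 0.0  # keep every patch when extracting features

def image_features(img: Image.Image) -> np.ndarray:
    inputs = processor(images=img, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state   # (1, tokens, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()       # pooled image embedding

# Fusion with tabular metadata (rooms, area, year, ...), then any regressor:
# X = np.hstack([X_meta, np.stack([image_features(im) for im in listing_images])])
# price_model = GradientBoostingRegressor().fit(X, prices)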
77

A Study on the Use of Unsupervised, Supervised, and Semi-supervised Modeling for Jamming Detection and Classification in Unmanned Aerial Vehicles

Margaux Camille Marie Catafort-Silva (18477354) 02 May 2024
<p dir="ltr">In this work, first, unsupervised machine learning is proposed as a study for detecting and classifying jamming attacks targeting unmanned aerial vehicles (UAV) operating at a 2.4 GHz band. Three scenarios are developed with a dataset of samples extracted from meticulous experimental routines using various unsupervised learning algorithms, namely K-means, density-based spatial clustering of applications with noise (DBSCAN), agglomerative clustering (AGG) and Gaussian mixture model (GMM). These routines characterize attack scenarios entailing barrage (BA), single- tone (ST), successive-pulse (SP), and protocol-aware (PA) jamming in three different settings. In the first setting, all extracted features from the original dataset are used (i.e., nine in total). In the second setting, Spearman correlation is implemented to reduce the number of these features. In the third setting, principal component analysis (PCA) is utilized to reduce the dimensionality of the dataset to minimize complexity. The metrics used to compare the algorithms are homogeneity, completeness, v-measure, adjusted mutual information (AMI) and adjusted rank index (ARI). The optimum model scored 1.00, 0.949, 0.791, 0.722, and 0.791, respectively, allowing the detection and classification of these four jamming types with an acceptable degree of confidence.</p><p dir="ltr">Second, following a different study, supervised learning (i.e., random forest modeling) is developed to achieve a binary classification to ensure accurate clustering of samples into two distinct classes: clean and jamming. Following this supervised-based classification, two-class and three-class unsupervised learning is implemented considering three of the four jamming types: BA, ST, and SP. In this initial step, the four aforementioned algorithms are used. This newly developed study is intended to facilitate the visualization of the performance of each algorithm, for example, AGG performs a homogeneity of 1.0, a completeness of 0.950, a V-measure of 0.713, an ARI of 0.557 and an AMI of 0.713, and GMM generates 1, 0.771, 0.645, 0.536 and 0.644, respectively. Lastly, to improve the classification of this study, semi-supervised learning is adopted instead of unsupervised learning considering the same algorithms and dataset. In this case, GMM achieves results of 1, 0.688, 0.688, 0.786 and 0.688 whereas DBSCAN achieves 0, 0.036, 0.028, 0.018, 0.028 for homogeneity, completeness, V-measure, ARI and AMI respectively. Overall, this unsupervised learning is approached as a method for jamming classification, addressing the challenge of identifying newly introduced samples.</p>
78

Label-Efficient Visual Understanding with Consistency Constraints

Zou, Yuliang 24 May 2022
Modern deep neural networks are proficient at solving various visual recognition and understanding tasks, as long as a sufficiently large labeled dataset is available at training time. However, progress on these visual tasks is limited by the number of manual annotations available. Annotating visual data is usually time-consuming and error-prone, making it challenging to scale up human labeling for many visual tasks. Fortunately, it is easy to collect large-scale, diverse unlabeled visual data from the Internet, and we can effortlessly acquire a large amount of synthetic visual data, with annotations, from game engines. In this dissertation, we explore how to utilize unlabeled data and synthetic labeled data for various visual tasks, aiming to replace or reduce direct supervision from manual annotations. The key idea is to encourage deep neural networks to produce consistent predictions across different transformations (e.g., geometric, temporal, photometric). We organize the dissertation as follows. In Part I, we propose using consistency across different geometric formulations and a cycle consistency over time to tackle low-level scene geometry perception tasks in a self-supervised learning setting. In Part II, we tackle high-level semantic understanding tasks in a semi-supervised learning setting, with the constraint that different augmented views of the same visual input maintain consistent semantic information. In Part III, we tackle the cross-domain image segmentation problem: by encouraging an adaptive segmentation model to output consistent results for a diverse set of strongly augmented synthetic data, the model learns to perform test-time adaptation on unseen target domains with a single forward pass, without model training or optimization at inference time. / Doctor of Philosophy / Recently, deep learning has emerged as one of the most powerful tools for solving various visual understanding tasks. However, the development of deep learning methods is significantly limited by the amount of manually labeled data. Annotating visual data is usually time-consuming and error-prone, making the human labeling process hard to scale. Fortunately, it is easy to collect large-scale, diverse raw visual data from the Internet (e.g., search engines, YouTube, Instagram), and we can effortlessly acquire a large amount of synthetic visual data, with annotations, from game engines. In this dissertation, we explore how we can utilize raw visual data and synthetic data for various visual tasks, aiming to replace or reduce direct supervision from manual annotations. The key idea is to encourage deep neural networks to produce consistent predictions for the same visual input across different transformations (e.g., geometric, temporal, photometric). We organize the dissertation as follows. In Part I, we propose using consistency across different geometric formulations and a forward-backward cycle consistency over time to tackle low-level scene geometry perception tasks, using unlabeled visual data only. In Part II, we tackle high-level semantic understanding tasks using a small amount of labeled data and a large amount of unlabeled data jointly, with the constraint that different augmented views of the same visual input maintain consistent semantic information. In Part III, we tackle the cross-domain image segmentation problem: by encouraging an adaptive segmentation model to output consistent results for a diverse set of strongly augmented synthetic data, the model learns to perform test-time adaptation on unseen target domains.
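A FixMatch-style consistency term in the spirit of Part II, as a hedged sketch: predictions on a weakly augmented view supervise a strongly augmented view of the same unlabeled image; the model and augmentation callables are placeholders:

import torch
import torch.nn.functional as F

def consistency_loss(model, x_unlabeled, weak_aug, strong_aug, tau=0.95):
    with torch.no_grad():
        probs = F.softmax(model(weak_aug(x_unlabeled)), dim=1)
        conf, pseudo = probs.max(dim=1)      # confidence and pseudo-label
        mask = (conf >= tau).float()         # trust only confident predictions
    logits = model(strong_aug(x_unlabeled))  # must stay consistent with the weak view
    loss = F.cross_entropy(logits, pseudo, reduction="none")
    return (mask * loss).mean()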
79

A Semi-Supervised Predictive Model to Link Regulatory Regions to Their Target Genes

Hafez, Dina Mohamed January 2015
Next-generation sequencing technologies have provided us with a wealth of data profiling a diverse range of biological processes. In an effort to better understand the process of gene regulation, two predictive machine learning models specifically tailored to analyzing gene transcription and polyadenylation are presented.

Transcriptional enhancers are specific DNA sequences that act as "information integration hubs" to confer regulatory requirements on a given cell. These non-coding DNA sequences can regulate genes from long distances, or across chromosomes, and their relationships with their target genes are not limited to one-to-one. With thousands of putative enhancers and fewer than 14,000 protein-coding genes, detecting enhancer-gene pairs becomes a very complex machine learning and data analysis challenge.

In order to predict these specific sequences and link them to the genes they regulate, we developed McEnhancer. Using DNase I sensitivity data and annotated in-situ hybridization gene expression clusters, McEnhancer builds interpolated Markov models to learn the enriched sequence content of known enhancer-gene pairs and predicts unknown interactions with a semi-supervised learning algorithm. Classification of predicted relationships was 73-98% accurate for gene sets with varying levels of initially known examples. Predicted interactions showed substantial overlap with Hi-C-identified interactions, and enrichment of known functionally related TF binding motifs and enhancer-associated histone modification marks, at the corresponding developmental time points, was highly evident.

On the other hand, pre-mRNA cleavage and polyadenylation is an essential step for 3'-end maturation and the subsequent stability and degradation of mRNAs. This process is highly controlled by cis-regulatory elements surrounding the cleavage site (polyA site), which are frequently constrained by sequence content and position. More than 50% of human transcripts have multiple functional polyA sites, and the specific use of alternative polyA sites (APA) results in isoforms with variable 3'-UTRs, potentially affecting gene regulation. Elucidating the regulatory mechanisms underlying differential polyA preferences in multiple cell types has been hindered by the lack of appropriate tests for determining APA events with significant differences across multiple libraries.

We specified a linear-effects regression model to identify tissue-specific biases indicating regulated APA; the significance of differences between tissue types was assessed by an appropriately designed permutation test. This combination allowed us to identify highly specific subsets of APA events in the individual tissue types. Predictive kernel-based SVM models successfully classified constitutive polyA sites against a biologically relevant background (auROC = 99.6%), as well as tissue-specific regulated sets against each other. The main cis-regulatory elements described for polyadenylation were found to be a strong and highly informative hallmark of constitutive sites only. Tissue-specific regulated sites were found to contain other regulatory motifs, with the canonical PAS signal being nearly absent at brain-specific sites. We applied this model to data for SRp20, an RNA-binding protein potentially involved in oncogene activation, and obtained interesting insights.

Together, these two models contribute to the understanding of enhancers and the key role they play in regulating tissue-specific expression patterns during development, and provide a better understanding of the diversity of post-transcriptional gene regulation in multiple tissue types. / Dissertation
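For intuition, a toy interpolated Markov model of the kind McEnhancer builds on, scoring DNA sequences by blending k-mer contexts of increasing order; the smoothing weights are illustrative, not the paper's scheme:

from collections import defaultdict
import math

class InterpolatedMarkov:
    def __init__(self, order=3):
        self.order = order
        self.counts = [defaultdict(lambda: defaultdict(int))
                       for _ in range(order + 1)]

    def train(self, seqs):
        for s in seqs:
            for i, base in enumerate(s):
                for k in range(self.order + 1):
                    if i >= k:
                        self.counts[k][s[i - k:i]][base] += 1

    def prob(self, context, base, lam=0.8):
        p = 0.25                                # uniform background over ACGT
        for k in range(self.order + 1):         # blend in longer contexts
            ctx = context[-k:] if k else ""
            total = sum(self.counts[k][ctx].values())
            if total:
                p = (1 - lam) * p + lam * self.counts[k][ctx][base] / total
        return p

    def log_score(self, s):
        return sum(math.log(self.prob(s[max(0, i - self.order):i], b))
                   for i, b in enumerate(s))

imm = InterpolatedMarkov(order=3)
imm.train(["ACGTACGTGGCC", "ACGTTTGGACGT"])     # toy training sequences
print(imm.log_score("ACGTAC"))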
80

Graph-based approaches for semi-supervised and cross-domain sentiment analysis

Ponomareva, Natalia January 2014
The rapid development of Internet technologies has resulted in a sharp increase in the number of Internet users who create content online. User-generated content often represents people's opinions, thoughts, speculations and sentiments, and is a valuable source of information for companies, organisations and individual users. This has led to the emergence of the field of sentiment analysis, which deals with the automatic extraction and classification of sentiments expressed in texts. Sentiment analysis has been intensively researched over the last ten years, but many issues remain to be addressed. One of the main problems is the lack of labelled data necessary for precise supervised sentiment classification. In response, research has moved towards developing semi-supervised and cross-domain techniques. Semi-supervised approaches still need some labelled data and their effectiveness is largely determined by the amount of these data, whereas cross-domain approaches usually perform poorly if the training data are very different from the test data. The majority of research on sentiment classification deals with the binary classification problem, although for many practical applications this rather coarse sentiment scale is not sufficient. It is therefore crucial to design methods that can perform accurate multiclass sentiment classification. The aims of this thesis are to address the problem of limited data availability in sentiment analysis and to advance research on semi-supervised and cross-domain approaches for sentiment classification, considering both binary and multiclass sentiment scales. We adopt graph-based learning as our main method and explore the most popular and widely used graph-based algorithm, label propagation. We investigate various ways of designing sentiment graphs and propose a new similarity measure which is unsupervised, easy to compute, does not require deep linguistic analysis and, most importantly, provides a good estimate of sentiment similarity, as proved by intrinsic and extrinsic evaluations. The main contribution of this thesis is the development and evaluation of a graph-based sentiment analysis system that (a) can cope with the challenges of limited data availability by using semi-supervised and cross-domain approaches, (b) is able to perform multiclass classification, and (c) achieves highly accurate results which are superior to those of most state-of-the-art semi-supervised and cross-domain systems. We systematically analyse and compare semi-supervised and cross-domain approaches in the graph-based framework and propose recommendations for selecting the most pertinent learning approach given the data available. Our recommendations are based on two domain characteristics, domain similarity and domain complexity, which were shown to have a significant impact on semi-supervised and cross-domain performance.
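A bare-bones version of the label propagation algorithm at the heart of such a system (Zhu and Ghahramani's iterative scheme), where the similarity matrix W would come from the proposed sentiment similarity measure; the data here are toy placeholders:

import numpy as np

def propagate(W, y_init, labeled, n_iter=100):
    # W: (n, n) similarities; y_init: (n, k) one-hot rows for labeled docs.
    P = W / W.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    Y = y_init.copy()
    for _ in range(n_iter):
        Y = P @ Y                          # spread sentiment to neighbors
        Y[labeled] = y_init[labeled]       # clamp the known labels
    return Y.argmax(axis=1)

rng = np.random.default_rng(3)
X = rng.normal(size=(6, 4))                               # toy document features
W = np.exp(-np.square(X[:, None] - X[None, :]).sum(-1))   # RBF similarities
y_init = np.zeros((6, 2))
y_init[0, 0] = 1.0                                        # one positive seed
y_init[5, 1] = 1.0                                        # one negative seed
labels = propagate(W, y_init, labeled=np.array([0, 5]))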
