131

Unsupervised Image-to-image translation : Taking inspiration from human perception

Sveding, Jens Jakob January 2021 (has links)
Generative artificial intelligence is a field of artificial intelligence in which systems learn underlying patterns in previously seen content and generate new content. This thesis explores a generative technique for image-to-image translation called the Cycle-consistent Adversarial Network (CycleGAN), which can translate images from one domain into another. CycleGAN is a state-of-the-art technique for unsupervised image-to-image translation. It uses the concept of cycle-consistency to learn a mapping between image distributions, where the Mean Absolute Error function is used to compare images and thereby learn an underlying mapping between the two image distributions. In this work, we propose the Structural Similarity Index Measure (SSIM) as an alternative to the Mean Absolute Error function. The SSIM is a metric inspired by human perception that measures the difference between two images by comparing their contrast, luminance, and structure. We examine whether using the SSIM as the cycle-consistency loss in the CycleGAN improves the quality of generated images as measured by the Inception Score and Fréchet Inception Distance, two metrics that have been proposed for evaluating the quality of images generated by generative adversarial networks (GANs). We conduct a controlled experiment to collect the quantitative metrics. Our results suggest that using the SSIM as the cycle-consistency loss in the CycleGAN will, in most cases, improve the quality of generated images as measured by the Inception Score and Fréchet Inception Distance.
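The substitution the abstract describes can be written down directly. As a reference sketch (following the original CycleGAN and SSIM papers, not necessarily the thesis's exact formulation), the L1 cycle-consistency term and one plausible SSIM-based replacement are:

```latex
% Standard cycle-consistency loss: mean absolute error (L1) on reconstructed images
\mathcal{L}_{\mathrm{cyc}}(G,F) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\lVert F(G(x)) - x \rVert_1\right] +
  \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\!\left[\lVert G(F(y)) - y \rVert_1\right]

% Structural Similarity Index Measure between images x and y
% (\mu: mean, \sigma^2: variance, \sigma_{xy}: covariance, c_1, c_2: stabilising constants)
\mathrm{SSIM}(x,y) =
  \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}
       {(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}

% SSIM-based cycle-consistency term (one plausible formulation of the proposed swap)
\mathcal{L}_{\mathrm{cyc}}^{\mathrm{SSIM}}(G,F) =
  \mathbb{E}_{x}\!\left[1 - \mathrm{SSIM}(F(G(x)), x)\right] +
  \mathbb{E}_{y}\!\left[1 - \mathrm{SSIM}(G(F(y)), y)\right]
```

Since SSIM is at most 1, the term 1 - SSIM acts as a distance-like penalty that is minimised when the reconstruction matches the original image.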
132

Entwicklung eines Monte-Carlo-Verfahrens zum selbständigen Lernen von Gauß-Mischverteilungen / Development of a Monte Carlo method for autonomous learning of Gaussian mixture distributions

Lauer, Martin 03 March 2005 (has links)
This thesis develops a novel learning method for Gaussian mixture models. It is based on Markov chain Monte Carlo techniques and is able to determine both the size of the mixture and its parameters in a single pass. The method is characterised by a good fit to the training data as well as by good generalisation performance. Starting from a description of the stochastic foundations and an analysis of the problems that arise when learning Gaussian mixtures, the thesis develops the new learning method step by step and examines its properties. An experimental comparison with known learning methods for Gaussian mixtures also demonstrates the suitability of the new method empirically.
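For reference, the model class involved and the quantity such a sampler explores can be stated generically (standard notation, not the thesis's exact prior or proposal scheme):

```latex
% Gaussian mixture with K components: mixing weights, means and covariances
p(x \mid K, \theta) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k),
\qquad \pi_k \ge 0, \quad \sum_{k=1}^{K} \pi_k = 1

% A Markov chain Monte Carlo sampler can explore the joint posterior over the
% number of components K and the parameters \theta given training data D
p(K, \theta \mid D) \;\propto\; p(D \mid K, \theta)\, p(\theta \mid K)\, p(K)
```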
133

Evaluating CNN-based models for unsupervised image denoising / En utvärdering av CNN-baserade metoder för icke-vägledd avbrusning av bilder

Lind, Johan January 2021 (has links)
Images are often corrupted by noise, which reduces their visual quality and interferes with analysis. Convolutional Neural Networks (CNNs) have become a popular method for denoising images, but their training typically relies on access to thousands of pairs of noisy and clean versions of the same underlying picture. Unsupervised methods lack this requirement and can instead be trained purely on noisy images. This thesis evaluated two different unsupervised denoising algorithms: Noise2Self (N2S) and Parametric Probabilistic Noise2Void (PPN2V), both of which train an internal CNN to denoise images. Four different CNNs were tested in order to investigate how the performance of these algorithms is affected by different network architectures. The testing used two datasets: one containing clean images corrupted by synthetic noise, and one containing images damaged by real noise originating from the camera used to capture them. Two of the networks, UNet and a CBAM-augmented UNet, achieved high performance competitive with the strong classical denoisers BM3D and NLM. The other two networks, GRDN and MultiResUNet, generally performed poorly.
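A minimal sketch of the self-supervised masking idea these methods build on (assumed tensor shapes and a toy stand-in network, not the thesis's exact models): hide a random subset of pixels, let the CNN predict them from their surroundings, and compute the loss only on the hidden pixels, so no clean targets are needed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def masked_training_step(model, noisy, mask_ratio=0.05):
    # noisy: (batch, 1, H, W) noisy images; no clean ground truth is available
    mask = (torch.rand_like(noisy) < mask_ratio).float()
    # Replace the masked pixels with a local average so the network
    # cannot simply copy them from the input
    blurred = F.avg_pool2d(noisy, kernel_size=3, stride=1, padding=1)
    masked_input = noisy * (1 - mask) + blurred * mask
    pred = model(masked_input)
    # Evaluate the reconstruction loss only on the hidden pixels
    return ((pred - noisy) ** 2 * mask).sum() / mask.sum().clamp(min=1.0)

# Toy fully convolutional denoiser standing in for UNet, CBAM-UNet, GRDN, etc.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
noisy_batch = torch.rand(4, 1, 64, 64)   # placeholder noisy images
loss = masked_training_step(model, noisy_batch)
loss.backward()
optimiser.step()
```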
134

Practical Morphological Modeling: Insights from Dialectal Arabic

Erdmann, Alexander January 2020 (has links)
No description available.
135

Bayesian Text Analytics for Document Collections

Walker, Daniel David 15 November 2012 (has links) (PDF)
Modern document collections are too large to annotate and curate manually. As increasingly large amounts of data become available, historians, librarians and other scholars increasingly need to rely on automated systems to efficiently and accurately analyze the contents of their collections and to find new and interesting patterns therein. Modern techniques in Bayesian text analytics are becoming widespread and have the potential to revolutionize the way that research is conducted. Much work has been done in the document modeling community towards this end, though most of it is focused on modern, relatively clean text data. We present research for improved modeling of document collections that may contain textual noise or that may include real-valued metadata associated with the documents. This class of documents includes many historical document collections. Indeed, our specific motivation for this work is to help improve the modeling of historical documents, which are often noisy and/or have historical context represented by metadata. Many historical documents are digitized by means of Optical Character Recognition (OCR) from document images of old and degraded originals. Historical documents also often include associated metadata, such as timestamps, which can be incorporated in an analysis of their topical content. Many techniques, such as topic models, have been developed to automatically discover patterns of meaning in large collections of text. While these methods are useful, they can break down in the presence of OCR errors. We show the extent to which this performance breakdown occurs. The specific types of analyses covered in this dissertation are document clustering, feature selection, unsupervised and supervised topic modeling for documents with and without OCR errors, and a new supervised topic model that uses Bayesian nonparametrics to improve the modeling of document metadata. We present results in each of these areas, with an emphasis on studying the effects of noise on the performance of the algorithms and on modeling the metadata associated with the documents. In this research we improve the state of the art in both document clustering and topic modeling, introduce a useful synthetic dataset for historical document researchers, and present analyses that empirically show how existing algorithms break down in the presence of OCR errors.
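An illustrative example (hypothetical data, not the dissertation's own models) of the kind of topic modeling applied to noisy OCR'd text, using scikit-learn's LDA implementation; the second placeholder document imitates typical OCR character confusions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the parliament debated the new tax law this session",
    "thc parliamcnt dcbated the ncw tax 1aw this scssion",   # OCR-corrupted copy
    "harvest records from the northern farms and villages",
    "grain and livestock counts collected for the season",
]

vectoriser = CountVectorizer(stop_words="english")
counts = vectoriser.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)       # per-document topic proportions

terms = vectoriser.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {top_terms}")
```

Mis-recognised tokens such as "parliamcnt" fragment the vocabulary, which is one simple way the OCR-noise degradation studied in the dissertation manifests.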
136

Presence detection by means of RF waveform classification

Lengdell, Max January 2022 (has links)
This master's thesis investigates the possibility of automatically labeling and classifying radio waves for presence detection, where the objective is to obtain information about the number of people in a room based on channel estimates. Labeling data for machine learning is a time-consuming and tedious process. To address this, two approaches are evaluated. One is to develop a framework that generates labels with the aid of computer-vision AI. The other relies on unsupervised learning classifiers complemented with heuristics to generate the labels. The investigation also studies the performance of the classifiers as a function of the TX/RX configuration, SNR, number of consecutive samples in a feature vector, bandwidth and frequency band. When someone moves in a room, the propagation environment changes and induces variations in the channel estimates compared to when the room is empty. These variations are the fundamental concept exploited in this thesis. Two methods are suggested for performing classification without the need for training data. The first uses random trees embeddings to construct a random forest without labels, and the second uses statistical bootstrapping with a random forest classifier. The labels used for annotation indicate whether there were zero, one or two people in the room. The performance of binary and non-binary classification is evaluated for the two blind detection models, as well as for the unsupervised learning techniques K-means and self-organizing maps. For classification, both the supervised and unsupervised approaches use random forest classifiers. Results show that random forest classifiers perform well for this kind of problem, and that random trees embeddings are able to extract relational structure that could be used for automatic labeling of the data.
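A hedged sketch of that label-free pipeline with scikit-learn (placeholder features standing in for real channel estimates; the clustering step and feature extraction here are assumptions, not the thesis's exact heuristics):

```python
import numpy as np
from sklearn.ensemble import RandomTreesEmbedding, RandomForestClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 32))      # placeholder channel-estimate feature vectors

# Unsupervised tree embedding of the raw feature vectors
embedder = RandomTreesEmbedding(n_estimators=100, random_state=0)
X_embedded = embedder.fit_transform(X)

# Heuristic pseudo-labels, e.g. zero, one or two people in the room
pseudo_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_embedded)

# Random forest trained on the automatically generated labels
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, pseudo_labels)
print(clf.predict(X[:5]))
```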
137

Conditional Noise-Contrastive Estimation : With Application to Natural Image Statistics / Uppskattning via betingat kontrastivt brus

Ceylan, Ciwan January 2017 (has links)
Unnormalised parametric models are an important class of probabilistic models which are difficult to estimate. The models are important since they occur in many different areas of application, e.g. in modelling of natural images, natural language and associative memory. However, standard maximum likelihood estimation is not applicable to unnormalised models, so alternative methods are required. Noise-contrastive estimation (NCE) has been proposed as an effective estimation method for unnormalised models. The basic idea is to transform the unsupervised estimation problem into a supervised classification problem. The parameters of the unnormalised model are learned by training the model to differentiate the given data samples from generated noise samples. However, the choice of the noise distribution has been left open to the user, and as the performance of the estimation may be sensitive to this choice, it is desirable for it to be automated. In this thesis, the ambiguity in the choice of the noise distribution is addressed by presenting the previously unpublished conditional noise-contrastive estimation (CNCE) method. Like NCE, CNCE estimates unnormalised models by classifying data and noise samples. However, the choice of noise distribution is partly automated via the use of a conditional noise distribution that is dependent on the data. In addition to introducing the core theory for CNCE, the method is empirically validated on data and models where the ground truth is known. Furthermore, CNCE is applied to natural image data to show its applicability in a realistic application.
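For reference, a sketch of the standard NCE objective that CNCE builds on, in the usual notation; CNCE, as described above, replaces the fixed noise distribution p_n with one conditioned on the data sample.

```latex
% Probability that a sample u came from the data rather than the noise,
% with \nu noise samples per data sample and unnormalised model p_m(\,\cdot\,;\theta)
h(u;\theta) = \frac{p_m(u;\theta)}{p_m(u;\theta) + \nu\, p_n(u)}

% NCE objective: logistic classification of the N data samples x_i
% against the \nu N generated noise samples y_j
J(\theta) = \frac{1}{N}\left(\sum_{i=1}^{N} \log h(x_i;\theta)
          + \sum_{j=1}^{\nu N} \log\!\big(1 - h(y_j;\theta)\big)\right)
```

Maximising J over the model parameters, with the log-normalising constant treated as an extra free parameter, is what allows unnormalised models to be estimated in this framework.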
138

Clustering Web Users by Mouse Movement to Detect Bots and Botnet Attacks

Morgan, Justin L 01 March 2021 (has links) (PDF)
The need for website administrators to efficiently and accurately detect the presence of web bots has proven to be a challenging problem. As the sophistication of modern web bots increases, specifically their ability to closely mimic human behavior, web bot detection schemes quickly become obsolete as they lose effectiveness. Although machine learning-based detection schemes have been a successful approach in recent implementations, web bots can apply similar machine learning tactics to mimic human users and thereby bypass such detection schemes. This work addresses the issue of machine learning-based bots bypassing machine learning-based detection schemes by introducing a novel unsupervised learning approach that clusters users based on behavioral biometrics. The idea is that, by differentiating users based on their behavior (for example, how they use the mouse or type on the keyboard), website administrators are given information with which to make more informed decisions on declaring whether a user is a human or a bot, much as login requirements on modern websites let administrators make such decisions. An added benefit of this approach is that it is a human observational proof (HOP), meaning that it does not inconvenience the user (user friction) with human interactive proofs (HIP) such as CAPTCHA, or with login requirements.
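A heavily hedged sketch (hypothetical features and synthetic traces, not the thesis's actual pipeline) of clustering sessions by mouse-movement biometrics: summarise each cursor trace with a few behavioural features, then cluster without labels.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def session_features(trace):
    # trace: array of (timestamp, x, y) cursor samples for one session
    dt = np.diff(trace[:, 0])
    dxy = np.diff(trace[:, 1:], axis=0)
    speed = np.linalg.norm(dxy, axis=1) / np.maximum(dt, 1e-6)
    return [speed.mean(), speed.std(), dt.mean(), dt.max()]

def fake_trace(rng, n=200):
    # synthetic stand-in for a recorded mouse trajectory
    t = np.cumsum(rng.uniform(0.01, 0.05, size=n))
    xy = np.cumsum(rng.normal(size=(n, 2)), axis=0)
    return np.column_stack([t, xy])

rng = np.random.default_rng(0)
traces = [fake_trace(rng) for _ in range(50)]
X = StandardScaler().fit_transform([session_features(t) for t in traces])

# Unlabelled sessions grouped by behaviour; an administrator could then inspect
# the clusters to decide which look human-like and which look bot-like
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))
```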
139

MENTAL STRESS AND OVERLOAD DETECTION FOR OCCUPATIONAL SAFETY

Eskandar, Sahel January 2022 (has links)
Stress and overload are strongly associated with unsafe behaviour, which has motivated various studies to detect them automatically in workplaces. This study aims to advance safety research by developing a data-driven stress and overload detection method. An unsupervised deep learning-based anomaly detection method is developed to detect stress. The proposed method combines a convolutional neural network encoder-decoder with a long short-term memory network equipped with an attention layer. Data from a field experiment with 18 participants was used to train and test the developed method. The field experiment was designed to include a pre-defined sequence of activities triggering mental and physical stress, while a wristband biosensor was used to collect physiological signals. The collected contextual and physiological data were pre-processed and then resampled into correlation matrices of 14 features. The correlation matrices are used as input to the unsupervised Deep Learning (DL) based anomaly detection method. The developed method is validated, offering accuracy and F-measures close to 0.98. The technique captures the correlations among the input data attributes, promoting higher interpretability of the DL method and easier comprehension. Over-reliance on an uncertain absolute ground truth, the need for a high number of training samples, and the requirement of a threshold for detecting anomalies are identified as shortcomings of the proposed method. To overcome these shortcomings, an Adaptive Neuro-Fuzzy Inference System (ANFIS) was designed and developed. While the ANFIS method did not improve the overall accuracy, it outperformed the DL-based method in detecting anomalies precisely. The overall performance of the ANFIS method is better than that of the DL-based method for the anomalous class, and the method results in fewer false alarms. However, the DL-based method remains suitable for circumstances where false alarms are tolerated. / Dissertation / Doctor of Philosophy (PhD)
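A small sketch of the data-preparation step the abstract describes (signal names, window length and the use of four signals are assumptions; the thesis uses 14 features): resample the wristband signals into fixed windows and turn each window into a correlation matrix for the anomaly detector.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
index = pd.date_range("2022-01-01", periods=6000, freq="s")
signals = pd.DataFrame(
    rng.normal(size=(6000, 4)),
    index=index,
    columns=["eda", "heart_rate", "skin_temp", "accel_mag"],  # placeholder signals
)

windows = []
for _, window in signals.resample("5min"):
    if len(window) > 1:
        windows.append(window.corr().to_numpy())   # one correlation matrix per window

X = np.stack(windows)    # (n_windows, n_features, n_features) input to the detector
print(X.shape)
```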
140

Optimized material flow using unsupervised time series clustering : An experimental study on the just in time supermarket for Volvo powertrain production Skövde.

Darwish, Amena January 2019 (has links)
Machine learning has achieved remarkable performance in many domains and now shows promise for solving manufacturing problems, as part of an ongoing trend of using machine learning in industrial applications. Treating material order demand in manufacturing as time-series sequences makes unsupervised time-series clustering applicable. This study aims to evaluate different time-series clustering approaches, algorithms, and distance measures on material flow data. Three approaches are evaluated: statistical clustering approaches, raw-based and shape-based approaches, and finally a feature-based approach. The objective is to categorize the materials in the supermarket (an intermediate storage area where materials are kept before product assembly) into three different flows according to their time-series properties. The experiments show that the feature-based approach performs best on this data. A feature filter is applied to keep the relevant features that capture the unique characteristics of the data with respect to the predicted output. In conclusion, the data type and structure, the goal of the clustering task, and the application domain all have to be considered when choosing a suitable clustering approach.
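A hedged sketch of the feature-based approach reported as best (the feature choices and demand data here are illustrative assumptions, not the study's actual feature set): summarise each material's demand series with a few features, scale them, and cluster into three flow categories.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def demand_features(series):
    # A few descriptive features of one material's order-demand time series
    diffs = np.diff(series)
    return [
        series.mean(),         # average demand level
        series.std(),          # variability
        (series == 0).mean(),  # share of zero-demand periods
        diffs.mean(),          # overall trend
    ]

rng = np.random.default_rng(0)
demand = rng.poisson(lam=5, size=(120, 52))   # 120 materials, 52 weekly demand values

X = StandardScaler().fit_transform([demand_features(s) for s in demand])
flows = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(flows))                     # number of materials per flow category
```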
