131

Noise Reduction in Flash X-ray Imaging Using Deep Learning

Sundman, Tobias January 2018 (has links)
Recent improvements in deep learning architectures, combined with the strength of modern computing hardware such as graphics processing units, have led to significant results in the field of image analysis. In this thesis work, locally connected architectures are employed to reduce noise in flash X-ray diffraction images. The layers in these architectures use convolutional kernels, but without shared weights. This combines the lower model memory footprint of convolutional networks with the higher model capacity of fully connected networks. Since the camera used to capture the diffraction images has pixelwise unique characteristics, and thus lacks equivariance, this compromise can be beneficial. The background images of this thesis work were generated with an active laser but without injected samples. Artificial diffraction patterns were then added to these background images, allowing U-Net architectures to be trained to separate them. Architecture A achieved a performance of 0.187 on the test set, roughly translating to 35 fewer photon errors than a model similar to the state of the art. After smoothing the photon errors, this performance increased to 0.285, since the U-Net architectures managed to remove flares where the state of the art could not. This could be taken as a proof of concept that locally connected networks are able to separate diffraction from background in flash X-ray imaging.
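To make the architectural trade-off concrete, the sketch below (an illustrative PyTorch module, not the thesis code; all shapes and names are assumed) implements a locally connected 2D layer: the same sliding-window structure as a convolution, but with a separate weight tensor for every output position, so no weights are shared across the image.

```python
# A minimal sketch of a locally connected layer: convolution-style local
# receptive fields, but one weight vector per output pixel (no weight sharing).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocallyConnected2d(nn.Module):
    def __init__(self, in_ch, out_ch, in_h, in_w, kernel=3, stride=1):
        super().__init__()
        self.kernel, self.stride = kernel, stride
        self.out_h = (in_h - kernel) // stride + 1
        self.out_w = (in_w - kernel) // stride + 1
        # One weight matrix per output position: (positions, out_ch, in_ch*k*k)
        self.weight = nn.Parameter(
            torch.randn(self.out_h * self.out_w, out_ch, in_ch * kernel * kernel) * 0.01)
        self.bias = nn.Parameter(torch.zeros(self.out_h * self.out_w, out_ch))

    def forward(self, x):                                        # x: (B, in_ch, H, W)
        patches = F.unfold(x, self.kernel, stride=self.stride)   # (B, in_ch*k*k, L)
        patches = patches.permute(2, 0, 1)                       # (L, B, in_ch*k*k)
        out = torch.einsum('lbi,loi->lbo', patches, self.weight) + self.bias.unsqueeze(1)
        out = out.permute(1, 2, 0)                               # (B, out_ch, L)
        return out.reshape(x.size(0), -1, self.out_h, self.out_w)

x = torch.randn(2, 1, 16, 16)                       # e.g. a small detector tile
print(LocallyConnected2d(1, 4, 16, 16)(x).shape)    # torch.Size([2, 4, 14, 14])
```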
132

Influencing the Properties of Latent Spaces

Zumer, Jeremie 08 1900 (has links)
No description available.
133

SOLVING PREDICTION PROBLEMS FROM TEMPORAL EVENT DATA ON NETWORKS

Hao Sha (11048391) 06 August 2021 (has links)
Many complex processes can be viewed as sequential events on a network. In this thesis, we study the interplay between a network and the event sequences on it. We first focus on predicting events on a known network. Examples include modeling retweet cascades, forecasting earthquakes, and tracing the source of a pandemic. Specifically, given the network structure, we solve two types of problems: (1) forecasting future events based on the historical events, and (2) identifying the initial event(s) based on some later observations of the dynamics. The inverse problem of inferring the unknown network topology or links based on the events is also of great importance. Examples along this line include constructing influence networks among Twitter users from their tweets, soliciting new members to join an event based on their participation history, and recommending positions for job seekers according to their work experience. Following this direction, we study two types of problems: (1) recovering influence networks, and (2) predicting links between a node and a group of nodes, from event sequences.
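As an illustration of the shared problem setup, and not of the models developed in the thesis, the sketch below represents a known network together with a time-stamped event sequence on its nodes, and applies a naive frequency-based baseline for guessing which node becomes active next; the graph and events are placeholders.

```python
# A minimal sketch of the data layout shared by these problems: a known network
# plus (node, timestamp) events, with a naive count-based next-node baseline.
from collections import Counter
import networkx as nx

G = nx.karate_club_graph()                                     # stand-in for the known network
events = [(0, 0.5), (2, 1.1), (0, 1.7), (33, 2.4), (2, 3.0)]   # (node, timestamp) sequence

def naive_next_node(events, G):
    """Score candidates by their own and their neighbours' historical activity."""
    counts = Counter(node for node, _ in events)
    scores = {v: counts[v] + 0.5 * sum(counts[u] for u in G.neighbors(v)) for v in G}
    return max(scores, key=scores.get)

print(naive_next_node(events, G))
```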
134

Automatická klasifikace obrazů / Automatic image classification

Ševčík, Zdeněk January 2020 (has links)
The aim of this thesis is to explore clustering algorithms from unsupervised machine learning that can be used to classify an image database by similarity. A theoretical basis is given for the chosen clustering algorithms. To improve the classification of the database used, the thesis also examines different image preprocessing methods, with which features are extracted from the images. The thesis then covers the implementation of these preprocessing methods and the practical application of the clustering algorithms. In the practical part, an application is programmed in the Python programming language that classifies the image database into classes by similarity. All of the methods used are tested, and the results are analysed at the end of the thesis.
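A minimal sketch of the pipeline described above, using an assumed colour-histogram feature and k-means as the clustering algorithm; the folder name and parameter values are placeholders, not the thesis implementation.

```python
# Extract simple features from each image, then group the database into
# classes of similar images with an unsupervised clustering algorithm.
import numpy as np
from pathlib import Path
from PIL import Image
from sklearn.cluster import KMeans

def colour_histogram(path, bins=8):
    """Tiny hand-crafted feature: a normalised RGB histogram."""
    img = np.asarray(Image.open(path).convert('RGB').resize((64, 64)))
    hist = [np.histogram(img[..., c], bins=bins, range=(0, 255))[0] for c in range(3)]
    feat = np.concatenate(hist).astype(float)
    return feat / feat.sum()

paths = sorted(Path('images').glob('*.jpg'))        # hypothetical image folder
X = np.stack([colour_histogram(p) for p in paths])
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
for p, lab in zip(paths, labels):
    print(lab, p.name)
```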
135

Hluboké neuronové sítě / Deep Neural Networks

Habrnál, Matěj January 2014 (has links)
The thesis addresses the topic of Deep Neural Networks, in particular the Deep Learning methods used to initialize the weights and the learning process itself within Deep Neural Networks. Attention is also given to the basic theory of classical Neural Networks, which is important for a comprehensive understanding of the issue. The aim of this work is to determine the optimal set of optional parameters of the algorithms on image recognition tasks of various complexity, through experiments with a purpose-built application applying Deep Neural Networks. Furthermore, evaluation and analysis of the results and lessons learned from the experimentation with classical and Deep Neural Networks are integrated in the thesis.
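A minimal sketch of one such initialization scheme, greedy layer-wise pretraining with autoencoders; the layer sizes and data are assumed, and the thesis may rely on other pretraining methods such as restricted Boltzmann machines.

```python
# Each layer is first trained as an autoencoder on the output of the previous
# layer; the stacked encoders then initialise the deep network before fine-tuning.
import torch
import torch.nn as nn

def pretrain_layer(data, in_dim, hidden_dim, epochs=10):
    enc, dec = nn.Linear(in_dim, hidden_dim), nn.Linear(hidden_dim, in_dim)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(dec(torch.relu(enc(data))), data)
        opt.zero_grad(); loss.backward(); opt.step()
    return enc

X = torch.rand(256, 784)                             # e.g. flattened images
sizes, layers, h = [784, 256, 64], [], X
for in_dim, out_dim in zip(sizes[:-1], sizes[1:]):
    enc = pretrain_layer(h, in_dim, out_dim)
    layers += [enc, nn.ReLU()]
    h = torch.relu(enc(h)).detach()                  # input for the next layer's pretraining
network = nn.Sequential(*layers, nn.Linear(sizes[-1], 10))   # fine-tune with labels afterwards
print(network)
```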
136

Nonlinear Methods of Aerodynamic Data-driven Reduced Order Modeling

Forsberg, Arvid January 2022 (has links)
Being able to accurately approximate the outputs of computationally expensive simulations for arbitrary input parameters, also called missing points estimation, is central to many different areas of research and development, with applications ranging from uncertainty propagation to control system design, to name a few. This project investigates the potential of kernel transformations and nonlinear autoencoders as methods of improving the accuracy of the proper orthogonal decomposition method combined with regression. The techniques are applied to aerodynamic pressure CFD data around airplane wings in both two- and three-dimensional settings. The novel methods show potential in select situations, but cannot at this stage be generally considered superior. Their performances are similar, although the procedure of designing and training a nonlinear autoencoder is less straightforward and more time-demanding than using kernel transformations. The results demonstrate the regression bottleneck of the proper orthogonal decomposition method, which is partially alleviated by the new methods. Future studies should focus on adapting the autoencoder training strategy to the architecture and data, as well as improving the regression stage of all methods.
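For reference, a minimal sketch of the baseline pipeline being improved: proper orthogonal decomposition via an SVD of snapshot data, followed by regression of the modal coefficients on the flow parameters. The data are synthetic stand-ins and the Gaussian process regressor is an assumed choice.

```python
# POD + regression: project snapshots onto a few SVD modes, then learn a map
# from input parameters to the modal coefficients for missing points estimation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
params = rng.uniform(size=(40, 2))                        # e.g. Mach number, angle of attack
snapshots = np.sin(params @ rng.normal(size=(2, 500)))    # stand-in for CFD pressure fields

mean = snapshots.mean(axis=0)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
r = 5                                                     # number of retained POD modes
coeffs = U[:, :r] * s[:r]                                 # modal coefficients per snapshot
modes = Vt[:r]

gp = GaussianProcessRegressor().fit(params, coeffs)       # regression: parameters -> coefficients
new_param = np.array([[0.3, 0.7]])
prediction = mean + gp.predict(new_param) @ modes         # reconstructed field at unseen parameters
print(prediction.shape)                                   # (1, 500)
```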
137

Online Non-linear Prediction of Financial Time Series Patterns

da Costa, Joel 11 September 2020 (has links)
We consider a mechanistic non-linear machine learning approach to learning signals in financial time series data. A modularised and decoupled algorithm framework is established and proven on daily sampled closing time-series data for JSE equity markets. The input patterns are based on input data vectors of data windows preprocessed into a sequence of daily, weekly, and monthly or quarterly sampled feature measurement changes (log feature fluctuations). The data processing is split into a batch-processed step, where features are learnt using a Stacked AutoEncoder (SAE) via unsupervised learning, after which both batch and online supervised learning are carried out on Feedforward Neural Networks (FNNs) using these features. The FNN output is a point prediction of measured time-series feature fluctuations (log differenced data) in the future (ex-post). Weight initializations for these networks are implemented with restricted Boltzmann machine pretraining and variance-based initializations. The validity of the FNN backtest results is shown under a rigorous assessment of backtest overfitting using both Combinatorially Symmetrical Cross Validation and Probabilistic and Deflated Sharpe Ratios. Results are further used to develop a view on the phenomenology of financial markets and the value of complex historical data under unstable dynamics.
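A minimal sketch of the two-stage structure described above, with synthetic data and assumed layer sizes: a stacked autoencoder learns features from windows of log feature fluctuations without supervision, and a feedforward network is then trained on those features to produce a point prediction.

```python
# Stage 1: unsupervised feature learning (autoencoder reconstruction).
# Stage 2: supervised point prediction of the next-period fluctuation from the features.
import torch
import torch.nn as nn

returns = torch.randn(1000, 30)                    # windows of log feature fluctuations
target = torch.randn(1000, 1)                      # next-period fluctuation (ex-post)

encoder = nn.Sequential(nn.Linear(30, 16), nn.Tanh(), nn.Linear(16, 8), nn.Tanh())
decoder = nn.Sequential(nn.Linear(8, 16), nn.Tanh(), nn.Linear(16, 30))
ae_opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(50):                                # stage 1: reconstruction objective
    loss = nn.functional.mse_loss(decoder(encoder(returns)), returns)
    ae_opt.zero_grad(); loss.backward(); ae_opt.step()

fnn = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 1))
fnn_opt = torch.optim.Adam(fnn.parameters(), lr=1e-3)
features = encoder(returns).detach()               # learnt SAE features
for _ in range(50):                                # stage 2: supervised regression
    loss = nn.functional.mse_loss(fnn(features), target)
    fnn_opt.zero_grad(); loss.backward(); fnn_opt.step()
print(fnn(encoder(returns[:1])).item())
```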
138

Evaluation of Multi-Platform LiDAR-Based Leaf Area Index Estimates Over Row Crops

Behrokh Nazeri (10233353) 05 March 2021 (has links)
Leaf Area Index (LAI) is an important variable both for characterizing plant canopy and as an input to many crop models. It is a dimensionless quantity broadly defined as the total one-sided leaf area per unit ground area, and is estimated over agricultural row crops by both direct and indirect methods. Direct methods, which involve destructive sampling, are laborious and time-consuming, while indirect methods such as remote sensing-based approaches have multiple sources of uncertainty. LiDAR (Light Detection and Ranging) data acquired from manned aircraft and UAVs have been investigated to estimate LAI based on physical/geometric features such as canopy gap fraction. High-resolution point cloud data acquired with a laser scanner from any platform, including terrestrial laser scanning and mobile mapping systems, contain random noise and outliers. Therefore, outlier detection in LiDAR data is often useful prior to analysis. Applications in agriculture are particularly challenging, as there is typically no prior knowledge of the statistical distribution of points, description of plant complexity, or local point densities, which are crop dependent. This dissertation first explores the effectiveness of using LiDAR data to estimate LAI for row crop plants at multiple times during the growing season from both a wheeled vehicle and an Unmanned Aerial Vehicle (UAV). Linear and nonlinear regression models are investigated for prediction, utilizing statistical and plant structure-based features extracted from the LiDAR point cloud data, with ground reference obtained from an in-field plant canopy analyzer and leaf area derived from destructive sampling. LAI estimates obtained from support vector regression (SVR) models with a radial basis function (RBF) kernel, developed using the wheel-based LiDAR system and UAVs, are promising, based on the coefficient of determination (R2) and root mean squared error (RMSE) of the residuals.

This dissertation also investigates approaches to minimize the impact of outliers on discrete return LiDAR acquired over crops, specifically for sorghum and maize breeding experiments, by an unmanned aerial vehicle (UAV) and a wheel-based ground platform. Two methods are explored to detect and remove the outliers from the plant datasets. The first is based on surface fitting to noisy point cloud data using normal and curvature estimation in a local neighborhood. The second utilizes the deep learning framework PointCleanNet. Both methods are applied to individual plants and field-based datasets. To evaluate the methods, an F-score and LAI are calculated both before and after outlier removal for both scenarios. Results indicate that the deep learning method for outlier detection is more robust to changes in point densities, level of noise, and shapes. The predicted LAI was also improved for the wheel-based vehicle data, based on the R2 value and RMSE of the residuals.

The quality of the extracted features depends on the point density and laser penetration of the canopy, and extracting appropriate features is a critical step towards accurate prediction models. Deep learning frameworks are increasingly being used in remote sensing applications. In the last objective of this study, a feature extraction approach is investigated for encoding LiDAR data acquired by UAV platforms multiple times during the growing season over sorghum and maize plant breeding experiments. LAI estimates obtained with these inputs are used to develop support vector regression (SVR) models using plant canopy analyzer data as the ground reference. Results are compared to models based on estimates from physically-based features and evaluated in terms of the coefficient of determination (R2). The effects of experimental conditions, including flying height, sensor characteristics, and crop type, are also investigated relative to the estimates of LAI.
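A minimal sketch of the regression step common to these objectives: support vector regression with an RBF kernel mapping LiDAR-derived features to LAI, evaluated by R2 and RMSE of the residuals. The feature names and values are synthetic placeholders, not the dissertation's data.

```python
# SVR with an RBF kernel: LiDAR point-cloud features -> LAI ground reference.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(1)
X = rng.uniform(size=(120, 4))        # e.g. gap fraction, height percentiles, point density
lai = 2.0 + 3.0 * X[:, 0] + rng.normal(scale=0.2, size=120)   # stand-in ground reference

X_tr, X_te, y_tr, y_te = train_test_split(X, lai, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel='rbf', C=10.0, epsilon=0.1))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print('R2  :', r2_score(y_te, pred))
print('RMSE:', mean_squared_error(y_te, pred) ** 0.5)
```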
139

Anomaly detection with machine learning methods at Forsmark

Sjögren, Simon January 2023 (has links)
Nuclear power plants are inherently complex systems. While the technology has been used to generate electrical power for many decades, process monitoring continuously evolves. There is always room for improvement in terms of maximizing the availability by reducing the risks of problems and errors. In this context, automated monitoring systems have become important tools – not least with the rapid progress being made in the field of data analytics thanks to ever increasing amounts of processing power. There are many different types of models that can be utilized for identifying anomalies. Some rely on physical properties and theoretical relations, while others rely more on the patterns of historical data. In this thesis, a data-driven approach using a hierarchical autoencoder framework has been developed for the purposes of anomaly detection at the Swedish nuclear power plant Forsmark. The model is first trained to recognize normal operating conditions. The trained model then creates reference values and calculates the deviations in relation to real data in order to identify any issues. This proof-of-concept has been evaluated and benchmarked against a currently used hybrid model with more physical modeling properties in order to identify benefits and drawbacks. Generally speaking, the created model has performed in line with expectations. The currently used tool is more flexible in its understanding of different plant states and is likely better at determining root causes thanks to its physical modeling properties. However, the created autoencoder framework does bring other advantages. For instance, it allows for a higher time resolution thanks to its relatively low calculation intensity. Additionally, thanks to its purely data-driven characteristics, it offers great opportunities for future reconfiguration and adaptation with different signal selections.
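A minimal sketch of the core idea, with synthetic signals and assumed sizes rather than the Forsmark model: an autoencoder is trained only on normal operation, and reconstruction deviations on new data that exceed a threshold learned from the training set are flagged as potential anomalies.

```python
# Train an autoencoder on normal operating data; flag inputs whose reconstruction
# error exceeds a threshold derived from the normal-data error distribution.
import torch
import torch.nn as nn

normal = torch.randn(5000, 20)                      # process signals during normal operation
model = nn.Sequential(nn.Linear(20, 8), nn.ReLU(), nn.Linear(8, 20))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):                                # learn what "normal" looks like
    loss = nn.functional.mse_loss(model(normal), normal)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    err = nn.functional.mse_loss(model(normal), normal, reduction='none').mean(1)
    threshold = err.quantile(0.99)                  # tolerance for "normal" deviation
    new_data = torch.randn(10, 20) + 2.0            # stand-in for incoming plant data
    deviation = nn.functional.mse_loss(model(new_data), new_data, reduction='none').mean(1)
    print((deviation > threshold).tolist())         # True where reconstruction deviates strongly
```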
140

Enhancing failure prediction from timeseries histogram data : through fine-tuned lower-dimensional representations

Jayaraman, Vijay January 2023 (has links)
Histogram data are widely used for compressing high-frequency time-series signals due to their ability to capture distributional information. However, this compression comes at the cost of increased dimensionality and loss of contextual details from the original features. This study addresses the challenge of effectively capturing changes in distributions over time and their contribution to failure prediction. Specifically, we focus on the task of predicting Time to Event (TTE) for turbocharger failures. In this thesis, we propose a novel approach to improve failure prediction by fine-tuning lower-dimensional representations of bi-variate histograms. The goal is to optimize these representations in a way that enhances their ability to predict component failure. Moreover, we compare the performance of our learned representations with hand-crafted histogram features to assess the efficacy of both approaches. We evaluate the different representations using the Weibull Time To Event - Recurrent Neural Network (WTTE-RNN) framework, which is a popular choice for TTE prediction tasks. By conducting extensive experiments, we demonstrate that the fine-tuning approach yields superior results compared to general lower-dimensional learned features. Notably, our approach achieves performance levels close to state-of-the-art results. This research contributes to the understanding of effective failure prediction from time series histogram data. The findings highlight the significance of fine-tuning lower-dimensional representations for improving predictive capabilities in real-world applications. The insights gained from this study can potentially impact various industries, where failure prediction is crucial for proactive maintenance and reliability enhancement.
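For context, a minimal sketch of the censored Weibull log-likelihood that the WTTE-RNN framework builds on, in its continuous-time form; the parameter values here are placeholders, and in the full framework alpha and beta would be produced by a recurrent network from the input features.

```python
# Censored Weibull log-likelihood: log f(t) for observed failures, log S(t) for
# right-censored units, written as u*log(hazard rate) - cumulative hazard.
import torch

def weibull_loglik(t, uncensored, alpha, beta, eps=1e-8):
    """t: time to event/censoring, uncensored: 1.0 if the failure was observed."""
    scaled = (t + eps) / alpha
    cum_hazard = scaled ** beta                       # Lambda(t) = (t/alpha)^beta
    log_hazard_rate = torch.log(beta) + (beta - 1.0) * torch.log(scaled) - torch.log(alpha)
    return uncensored * log_hazard_rate - cum_hazard

# Toy check: the loss (negative log-likelihood) to minimise for a small batch.
t = torch.tensor([5.0, 12.0, 30.0])
u = torch.tensor([1.0, 1.0, 0.0])                     # last unit is right-censored
alpha = torch.tensor([10.0, 10.0, 10.0], requires_grad=True)
beta = torch.tensor([1.5, 1.5, 1.5], requires_grad=True)
loss = -weibull_loglik(t, u, alpha, beta).mean()
loss.backward()
print(loss.item())
```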
