31

A COMPARATIVE STUDY OF DEEP-LEARNING APPROACHES FOR ACTIVITY RECOGNITION USING SENSOR DATA IN SMART OFFICE ENVIRONMENTS

Johansson, Alexander, Sandberg, Oscar January 2018 (has links)
The purpose of the study is to compare three deep learning networks with each other to determine which network can produce the highest prediction accuracy. Accuracy is measured as the networks try to predict the number of people in the room where observation takes place. In addition to comparing the three deep learning networks with each other, we also compare them with a traditional machine learning approach, in order to find out whether deep learning methods perform better than traditional methods. The study uses design and creation, a research methodology that places great emphasis on developing an IT product and uses the product as its contribution to new knowledge. The methodology has five phases; we chose to iterate between the development and evaluation phases. Observation is the data generation method used to collect data. Data generation lasted for three weeks, resulting in 31,287 rows of data recorded in our database. One of our deep learning networks produced an accuracy of 78.2%, while the other two produced accuracies of 45.6% and 40.3% respectively. For the traditional approach we used a decision tree with two different formulas, which produced accuracies of 61.3% and 57.2% respectively. The result of this thesis shows that, of the three deep learning networks included in this study, only one is able to produce a higher predictive accuracy than the traditional machine learning approaches. This result does not necessarily mean that deep learning approaches in general produce higher predictive accuracy than traditional machine learning approaches. Further work includes: further experimentation with the dataset and the networks' hyperparameters, gathering more data and properly validating it, and comparing more deep learning and machine learning approaches.
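A hedged illustration of the traditional baseline in this abstract: the decision tree's "two formulas" plausibly refer to the two common split criteria, Gini impurity and entropy — that mapping, along with the synthetic sensor data and feature names below, is an assumption rather than a detail from the study.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the room-sensor dataset (assumed features).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))               # e.g. CO2, temperature, sound, ...
y = (X[:, 0] + X[:, 1] > 0).astype(int) * 2  # toy occupancy counts: 0 or 2 people
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "two formulas" are taken here to be the two split criteria.
for criterion in ("gini", "entropy"):
    tree = DecisionTreeClassifier(criterion=criterion, random_state=0)
    tree.fit(X_train, y_train)
    print(criterion, accuracy_score(y_test, tree.predict(X_test)))
```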
32

Automatic Liver and Tumor Segmentation from CT Scan Images using Gabor Feature and Machine Learning Algorithms

Shrestha, Ujjwal 19 December 2018 (has links)
No description available.
33

TOWARDS REVERSE ENGINEERING DEEP NEURAL NETWORKS ON EDGE DEVICES

Ruoyu Wu (18837580) 20 June 2024 (has links)
Deep Neural Networks (DNNs) have been deployed on edge devices for numerous applications, including computer vision, speech recognition, and anomaly detection. When deployed on edge devices, dedicated DNN compilers compile DNNs into binaries that exploit instruction set architecture (ISA) features and hardware accelerators (e.g., NPU, GPU). These DNN binaries process sensitive user information, conduct critical missions, and are considered confidential intellectual property.

From the security standpoint, the ability to reverse engineer such binaries (i.e., to recover the original, high-level representation of the implemented DNN) enables several applications, such as DNN model stealing, gray/white-box adversarial machine learning attacks and defenses, and backdoor detection. However, no existing reverse engineering technique can recover a high-level representation of a DNN model from its compiled binary code.

In this dissertation, we propose the following pioneering research for reverse engineering DNNs on edge devices. (i) We design and implement the first compiler- and ISA-agnostic DNN decompiler, DnD, based on static analysis, capable of extracting DNN models from DNN binaries running on CPU-only devices without hardware accelerators. We show that our decompiler can perfectly recover DNN models from different DNN binaries. Furthermore, it can extract DNN models used by real-world micro-controllers and enable white-box adversarial machine learning attacks against them. (ii) We design and implement a novel data-driven approach, NeuroScope, based on dynamic analysis and machine learning, to reverse engineer DNN binaries. This compiler-independent and code-feature-free approach supports a larger variety of DNN binaries across different DNN compilers and hardware platforms. We demonstrate its capability by using it to reverse engineer, with high accuracy, DNN binaries unsupported by previous approaches. Moreover, we showcase how NeuroScope can be used to reverse engineer a proprietary DNN binary compiled with a closed-source compiler and enable gray-box adversarial machine learning attacks.
34

Data Deconvolution for Drug Prediction

Menacher, Lisa Maria January 2024 (has links)
Treating cancer is difficult, as the disease is complex and drug responses often depend on the patient's characteristics. Precision medicine aims to solve this by selecting individualized treatments. Since this involves the analysis of large datasets, machine learning can make the drug selection process more efficient. Traditionally, such models use bulk gene expression data, which potentially masks information from small cell populations and fails to address tumor heterogeneity. This thesis therefore applies data deconvolution methods to bulk gene expression data and estimates the corresponding cell type-specific gene expression profiles, "increasing" the resolution of the input data for drug response prediction. A hold-out dataset, LODOCV, and LOCOCV were used to evaluate this approach, and all results are compared against a baseline model trained on bulk data. Overall, the accuracy of the cell type-specific model did not improve on the bulk model. It also prioritizes information from bulk samples, which makes the additional data unnecessary. The robustness of the cell type-specific model is slightly lower than that of the bulk model. Note that these outcomes are not necessarily due to a flaw in the underlying concept; they may stem from poor deconvolution results, as the same reference matrix was used for the deconvolution of all bulk samples regardless of cancer type or disease.
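As a sketch of the deconvolution step described here: reference-based deconvolution is commonly posed as non-negative least squares, with a bulk expression vector modeled as a reference ("signature") matrix times cell type proportions. The matrix sizes, simulated data, and use of NNLS below are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np
from scipy.optimize import nnls

# Toy reference matrix: genes x cell types. In practice this would come
# from sorted-cell or single-cell expression profiles.
rng = np.random.default_rng(0)
S = rng.uniform(0, 10, size=(500, 4))                   # 500 genes, 4 cell types
true_frac = np.array([0.5, 0.3, 0.15, 0.05])            # simulated composition
bulk = S @ true_frac + rng.normal(scale=0.5, size=500)  # one noisy bulk sample

frac, _ = nnls(S, bulk)        # non-negative least squares fit
frac /= frac.sum()             # normalize to cell type proportions
print(np.round(frac, 3))       # approximately recovers true_frac
```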
35

HIGH SPEED IMAGING VIA ADVANCED MODELING

Soumendu Majee (10942896) 04 August 2021 (has links)
There is an increasing need to image objects accurately at high temporal resolution, across many applications, in order to analyze the underlying physical, chemical, or biological processes. In this thesis, we use advanced models that exploit the image structure and the measurement process to achieve improved temporal resolution. The thesis is divided into three chapters, each corresponding to a different imaging application.

In the first chapter, we propose a novel method to localize neurons in fluorescence microscopy images. Accurate localization of neurons enables us to scan only the neuron locations instead of the full brain volume, and thus improve the temporal resolution of neuron activity monitoring. We formulate neuron localization as an inverse problem in which we reconstruct an image that encodes the locations of the neuron centers. The sparsity of the neuron centers serves as a prior model, while the forward model comprises shape models estimated from training data.

In the second chapter, we introduce multi-slice fusion, a novel framework for incorporating advanced prior models into inverse problems spanning many dimensions, such as 4D computed tomography (CT) reconstruction. State-of-the-art 4D reconstruction methods use model-based iterative reconstruction (MBIR), which depends critically on the quality of the prior model. Incorporating deep convolutional neural networks (CNNs) into the 4D reconstruction problem is difficult due to computational cost and the lack of high-dimensional training data. Multi-slice fusion integrates the tomographic forward model with multiple low-dimensional CNN denoisers along different planes to produce a 4D regularized reconstruction. The improved regularization allows each time-frame to be reconstructed from fewer measurements, resulting in improved temporal resolution. Experimental results on sparse-view and limited-angle CT data demonstrate that multi-slice fusion can substantially improve reconstruction quality relative to traditional methods, while remaining practical to implement and train.

In the final chapter, we introduce CodEx, a synergistic combination of coded acquisition and non-convex Bayesian reconstruction for improving acquisition speed in computed tomography (CT). In an ideal "step-and-shoot" tomographic acquisition, the object is rotated to each desired angle and a view is taken. However, step-and-shoot acquisition is slow and can waste photons, so in practice the object typically rotates continuously, leading to blurry views and reconstructions with severe motion artifacts. CodEx works by encoding the acquisition with a known binary code that the reconstruction algorithm then inverts. The CodEx reconstruction method uses the alternating direction method of multipliers (ADMM) to split the inverse problem into iterative deblurring and reconstruction sub-problems, making reconstruction practical. CodEx thus allows fast data acquisition with good temporal resolution in the reconstruction.
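The ADMM splitting described for CodEx — alternating a deblurring sub-problem with a reconstruction sub-problem — can be sketched on a toy linear model. All specifics below (operator shapes, a two-tap blur standing in for the coded acquisition, the penalty parameter, the iteration count) are assumptions for illustration, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 40, 60                           # unknowns, measurements (toy sizes)
A = rng.normal(size=(m, n))             # forward (projection) operator
C = np.eye(m) + 0.5 * np.eye(m, k=-1)   # two-tap blur standing in for the code
x_true = rng.normal(size=n)
y = C @ A @ x_true                      # blurred, coded measurements

# ADMM on min ||y - C A x||^2 with the splitting v = A x.
rho = 1.0
x, v, u = np.zeros(n), np.zeros(m), np.zeros(m)
Pv = np.linalg.inv(2 * C.T @ C + rho * np.eye(m))
for _ in range(300):
    v = Pv @ (2 * C.T @ y + rho * (A @ x + u))    # deblurring sub-problem
    x = np.linalg.lstsq(A, v - u, rcond=None)[0]  # reconstruction sub-problem
    u += A @ x - v                                # dual update

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # relative error -> small
```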
36

Applications of Deep Neural Networks in Computer-Aided Drug Design

Ahmadreza Ghanbarpour Ghouchani (10137641) 01 March 2021 (has links)
Deep neural networks (DNNs) have gained tremendous attention in recent years due to their outstanding performance in solving problems across many fields of science and technology, and the field is growing rapidly. The ability of DNNs to learn new concepts with minimal instruction makes it straightforward to apply current DNN-based methods to new problems. In this dissertation, three DNN-based methods are discussed, tackling different problems in the field of computer-aided drug design.

The first method addresses the prediction of hydration properties from 3D structures of proteins without requiring molecular dynamics simulations. Water plays a major role in protein-ligand interactions, and identifying the (de)solvation contributions of water molecules can assist drug design. Two model architectures are presented for predicting the hydration information of proteins. The performance of the methods is compared with conventional methods and experimental data, and their applications in ligand optimization and pose prediction are shown.

The design of de novo molecules has always been of interest in drug discovery. The second method describes a generative model that learns to derive features from protein sequences to design de novo compounds. We show how the model can generate molecules similar to known ones for targets it has not seen before, and we compare it with benchmark generative models.

Finally, we demonstrate how DNNs can learn to predict secondary structure propensity values derived from NMR ensembles. Secondary structure propensities are important for identifying flexible regions in proteins. Protein flexibility plays a major role in drug-protein binding, and identifying such regions can assist in developing methods for ligand binding prediction. The prediction performance of the method is shown for several proteins with two or more known secondary structure conformations.
37

Karst Database Implementation in Minnesota: Analysis of Sinkhole Distribution

Gao, Y., Alexander, E. C., Barnes, R. J. 01 May 2005 (has links)
This paper presents the overall sinkhole distributions and conducts hypothesis tests of sinkhole distributions and sinkhole formation using data stored in the Karst Feature Database (KFD) of Minnesota. Nearest neighbor analysis (NNA) was extended to include different orders of NNA, different scales of concentrated zones of sinkholes, and directions to the nearest sinkholes. The statistical results, along with the sinkhole density distribution, indicate that sinkholes tend to form in highly concentrated zones instead of scattered individuals. The pattern changes from clustered to random to regular as the scale of the analysis decreases from 10-100 km² to 5-30 km² to 2-10 km². Hypotheses that may explain this phenomenon are: (1) areas in the highly concentrated zones of sinkholes have similar geologic and topographical settings that favor sinkhole formation; (2) existing sinkholes change the hydraulic gradient in the surrounding area and increase the solution and erosional processes that eventually form more new sinkholes.
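A minimal sketch of first-order nearest neighbor analysis as used above: the Clark-Evans ratio compares the observed mean nearest-neighbor distance with the value expected under complete spatial randomness (R < 1 clustered, R ≈ 1 random, R > 1 regular). The simulated coordinates and function name below are illustrative, not data from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def clark_evans_ratio(points, area):
    """Nearest-neighbor ratio R: R < 1 clustered, R ~ 1 random, R > 1 regular."""
    d, _ = cKDTree(points).query(points, k=2)  # k=1 is each point itself
    d_obs = d[:, 1].mean()                     # observed mean NN distance
    d_exp = 0.5 / np.sqrt(len(points) / area)  # expected under randomness
    return d_obs / d_exp

rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(300, 2))        # simulated sinkhole coordinates
print(clark_evans_ratio(pts, area=100.0))      # close to 1 for a random pattern
```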
38

Co-designing Communication Middleware and Deep Learning Frameworks for High-Performance DNN Training on HPC Systems

Awan, Ammar Ahmad 10 September 2020 (has links)
No description available.
39

Permanganate Reaction Kinetics and Mechanisms and Machine Learning Application in Oxidative Water Treatment

Zhong, Shifa 21 June 2021 (has links)
No description available.
40

Study of evaluation metrics while predicting the yield of lettuce plants in indoor farms using machine learning models

Chedayan, Divya, Geo Fernandez, Harry January 2023 (has links)
A key challenge for maximizing the world's food supply is crop yield prediction. In this study, three machine learning models are used to predict the fresh weight (yield) of lettuce plants grown hydroponically in indoor vertical farms: a support vector regressor (SVR), a random forest regressor (RFR), and a deep neural network (DNN). Climate data, nutrient data, and plant growth data are passed as input to train the models to learn the growth pattern from the available features. The evaluation metrics studied are mean squared error (MSE), root mean squared error (RMSE), mean absolute error (MAE), R-squared, and adjusted R-squared. The results show that the random forest with all features is the best model, with the lowest cross-validated MAE and a good cross-validated adjusted R-squared value, indicating minimal prediction error. It is followed by the DNN model, with only minor differences in the resulting values. The SVR model performed poorly, with an error too large to be acceptable in this scenario. We also compared the evaluation metrics listed above and settled on cross-validated MAE and cross-validated adjusted R-squared: MAE is less sensitive to outliers, and adjusted R-squared captures how much of the target's variance the predictors explain while adjusting for the number of predictors to guard against overfitting.
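The metrics this abstract compares are straightforward to compute; below is a minimal sketch with scikit-learn, using placeholder data and a placeholder model in place of the thesis's lettuce dataset (every name and number in it is an assumption).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

# Placeholder features (climate, nutrient, growth) and yield target.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = X @ rng.normal(size=8) + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(random_state=0)
y_pred = cross_val_predict(model, X, y, cv=5)   # cross-validated predictions

mse = mean_squared_error(y, y_pred)
rmse = np.sqrt(mse)
mae = mean_absolute_error(y, y_pred)
r2 = r2_score(y, y_pred)
n, p = X.shape
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)   # penalizes extra predictors
print(f"MSE={mse:.3f} RMSE={rmse:.3f} MAE={mae:.3f} R2={r2:.3f} adjR2={adj_r2:.3f}")
```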
