291

Software Requirements Classification Using Word Embeddings and Convolutional Neural Networks

Fong, Vivian Lin 01 June 2018
Software requirements classification, the practice of categorizing requirements by their type or purpose, can improve organization and transparency in the requirements engineering process and thus promote requirement fulfillment and software project completion. Requirements classification automation is a prominent area of research, as automation can alleviate the tedium of manual labeling and reduce its reliance on domain expertise. This thesis explores the application of deep learning techniques to software requirements classification, specifically the use of word embeddings for document representation when training a convolutional neural network (CNN). As past research efforts have mainly relied on information retrieval and traditional machine learning techniques, we examine the potential of deep learning for this particular task. With the support of learning libraries such as TensorFlow and Scikit-Learn and word embedding models such as word2vec and fastText, we build a Python system that trains and validates configurations of Naïve Bayes and CNN requirements classifiers. Applying our system to a suite of experiments on two well-studied requirements datasets, we recreate or establish the Naïve Bayes baselines and evaluate the impact of CNNs equipped with word embeddings trained from scratch versus word embeddings pre-trained on Big Data.
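As an illustration of the kind of classifier described above, the sketch below builds a small 1D CNN text classifier in TensorFlow/Keras with an embedding layer. The vocabulary size, sequence length handling, number of classes, and layer sizes are assumptions for illustration, not the thesis's actual configuration; a pre-trained word2vec or fastText matrix could be injected through embeddings_initializer instead of training the embeddings from scratch.

```python
# Minimal sketch (assumed hyperparameters) of a CNN requirements classifier
# over word embeddings, in the spirit of the system described above.
import tensorflow as tf

NUM_WORDS = 10_000    # assumed vocabulary size
EMBED_DIM = 300       # word2vec/fastText vectors are commonly 300-dimensional
NUM_CLASSES = 12      # assumed number of requirement categories

model = tf.keras.Sequential([
    # To use pre-trained vectors, pass
    # embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix).
    tf.keras.layers.Embedding(NUM_WORDS, EMBED_DIM),
    tf.keras.layers.Conv1D(128, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```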
292

Graph Neural Networks Based on Multi-Rate Signal Decomposition for Bearing Fault Diagnosis

Guanhua Zhu (15454712) 12 May 2023
Roller bearings are common components in mechanical systems used for machining and production. The running state of roller bearings often determines the machining accuracy and productivity of a manufacturing line, and bearing failure may lead to the shutdown of production lines, resulting in serious economic losses. Research on roller bearing fault diagnosis therefore has great value. This thesis first proposes a method of frequency-spectrum resampling to tackle the problem of bearing fault detection at different rotating speeds while training the network, such as a one-dimensional convolutional neural network (1D CNN), on a single-speed dataset. Second, this work proposes a technique that connects the graph structures constructed from spectral components of the different bearing fault frequency bands into a sparse graph structure, so that fault identification can be carried out effectively by a graph neural network in terms of both computation load and classification rate. Finally, the frequency-spectrum resampling method for feature extraction is validated using our self-collected datasets. The performance of the graph neural network with our proposed sparse graph structure is validated using the Case Western Reserve University (CWRU) dataset as well as our self-collected datasets. The results show that our proposed method achieves higher bearing fault classification accuracy than recently reported machine learning and neural network approaches.
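The abstract does not spell out the resampling procedure, but the general idea of normalizing spectra recorded at different shaft speeds to a common reference speed can be sketched as follows; the function name, parameters, and interpolation choice are illustrative assumptions.

```python
import numpy as np

def resample_spectrum(signal, fs, speed_rpm, ref_rpm, n_bins=2048):
    """Rescale the frequency axis by ref_rpm / speed_rpm so that speed-dependent
    fault components line up with those of a reference-speed recording."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    scaled = freqs * (ref_rpm / speed_rpm)          # map onto the reference-speed axis
    grid = np.linspace(0.0, scaled[-1], n_bins)     # fixed-length feature vector
    return np.interp(grid, scaled, spectrum)
```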
293

Assessing the Streamline Plausibility Through Convex Optimization for Microstructure Informed Tractography (COMMIT) with Deep Learning

Wan, Xinyi January 2023
Tractography is widely used in brain connectivity studies based on diffusion magnetic resonance imaging data. However, the lack of ground truth and the many anatomically implausible streamlines in tractograms raise challenges and concerns for their use, for example in brain connectivity studies. Tractogram filtering methods have been developed to remove these faulty connections. In this study, we focus on one of these filtering methods, Convex Optimization Modeling for Microstructure Informed Tractography (COMMIT), which seeks the set of streamlines that best reconstructs the diffusion magnetic resonance imaging data through a global optimization approach. This method is biased when assessing individual streamlines, so a method named randomized COMMIT (rCOMMIT) is proposed to obtain multiple assessments for each streamline. The resulting acceptance rate divides the streamlines into three groups, which are regarded as pseudo ground truth from rCOMMIT. Neural networks are then trained on this pseudo ground truth for classification tasks. The trained classifiers distinguish the obtained groups of plausible and implausible streamlines with an accuracy of around 77%. Following the same methodology, the results from rCOMMIT and randomized SIFT are compared, and the intersections between the two methods are analyzed with neural networks as well, achieving an accuracy of around 87% on the binary task of separating plausible from implausible streamlines.
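A minimal sketch of how the rCOMMIT acceptance rate could be turned into the three pseudo-ground-truth groups mentioned above; the thresholds and array layout are assumptions for illustration, not values from the thesis.

```python
import numpy as np

def acceptance_groups(decisions, low=0.3, high=0.7):
    """decisions: (n_runs, n_streamlines) boolean array, True if a randomized
    COMMIT run kept the streamline. Returns per-streamline acceptance rates
    and a coarse three-way pseudo label."""
    rate = decisions.mean(axis=0)
    labels = np.full(rate.shape, "uncertain", dtype=object)
    labels[rate >= high] = "plausible"
    labels[rate <= low] = "implausible"
    return rate, labels
```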
294

Extracting Topography from Historic Topographic Maps Using GIS-Based Deep Learning

Pierce, Briar Z, Ernenwein, Eileen G 25 April 2023
Historical topographic maps are valuable resources for studying past landscapes, but two-dimensional cartographic features are unsuitable for geospatial analysis. They must be extracted and converted into digital formats. This has been accomplished by researchers using sophisticated image processing and pattern recognition techniques, and more recently, artificial intelligence. While these methods are sometimes successful, they require a high level of technical expertise, limiting their accessibility. This research presents a straightforward method practitioners can use to create digital representations of historical topographic data within commercially available Geographic Information Systems (GIS) software. This study uses convolutional neural networks to extract elevation contour lines from a 1940 United States Geological Survey (USGS) topographic map in Sevier County, TN, ultimately producing a Digital Elevation Model (DEM). The topographically derived DEM (TOPO-DEM) is compared to a modern LiDAR-derived DEM to analyze its quality and utility. GIS-capable historians, archaeologists, geographers, and others can use this method in their research and land management practices.
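One hedged way to quantify the comparison between the TOPO-DEM and the LiDAR-derived DEM is a simple cell-wise error summary; the metric choice below is an assumption for illustration, not necessarily the analysis used in the study.

```python
import numpy as np

def dem_error_stats(topo_dem, lidar_dem):
    """Both inputs are 2D elevation arrays already aligned to the same grid."""
    diff = topo_dem - lidar_dem
    valid = np.isfinite(diff)                      # ignore NoData cells
    return {"rmse": float(np.sqrt(np.mean(diff[valid] ** 2))),
            "mean_bias": float(np.mean(diff[valid]))}
```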
295

Single Molecule Analysis and Wavefront Control with Deep Learning

Peiyi Zhang (15361429) 27 April 2023
Analyzing single molecule emission patterns plays a critical role in retrieving the structural and physiological information of their tagged targets and, further, in understanding their interactions and cellular context. These emission patterns of tiny light sources (i.e. point spread functions, PSFs) simultaneously encode information such as the molecule's location, its orientation, the environment within the specimen, and the paths the emitted photons took before being captured by the camera. However, retrieving multiple classes of information beyond the 3D position from complex or high-dimensional single molecule data remains challenging, due to the difficulty of perceiving and summarizing a comprehensive yet succinct model. We developed smNet, a deep neural network that can extract multiplexed information near the theoretical limit from both complex and high-dimensional point spread functions. Through simulated and experimental data, we demonstrated that smNet can be trained to efficiently extract both molecular and specimen information, such as molecule location, dipole orientation, and wavefront distortions, from complex and subtle features of the PSFs that are otherwise considered too complex for established algorithms.

Single molecule localization microscopy (SMLM) forms super-resolution images with a resolution of several to tens of nanometers, relying on accurate localization of molecules' 3D positions from isolated single molecule emission patterns. However, inhomogeneous refractive indices distort and blur single molecule emission patterns, reduce the information content carried by each detected photon, increase localization uncertainty, and thus cause significant resolution loss that is irreversible by post-processing. To compensate for tissue-induced aberrations, conventional sensorless adaptive optics methods rely on iterative mirror changes guided by image-quality metrics. But these metrics produce inconsistent, and sometimes opposite, responses, which fundamentally limits the efficacy of such approaches for aberration correction in tissues. Bypassing this iterative trial-then-evaluate process, we developed deep learning driven adaptive optics (DL-AO) for SMLM to directly infer wavefront distortion and compensate for it in near real time during data acquisition. Our trained deep neural network monitors the individual emission patterns from single molecule experiments, infers their shared wavefront distortion, feeds the estimates through a dynamic (Kalman) filter, and drives a deformable mirror to compensate for sample-induced aberrations. We demonstrated that DL-AO restores single molecule emission patterns to conditions approaching those of an unaberrated specimen and improves the resolution and fidelity of 3D SMLM through more than 130 µm of brain tissue, with as few as 3-20 mirror changes.
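The abstract mentions feeding the network's wavefront estimates through a Kalman filter before driving the deformable mirror. As a rough, hedged illustration of that idea (not the DL-AO implementation, whose state model and noise parameters are unspecified here), a scalar Kalman filter applied to successive estimates of a single wavefront coefficient looks like this.

```python
import numpy as np

def kalman_smooth(estimates, process_var=1e-4, meas_var=1e-2):
    """Smooth a sequence of per-batch estimates of one wavefront coefficient."""
    x, p = float(estimates[0]), 1.0        # initial state and variance
    smoothed = [x]
    for z in estimates[1:]:
        p += process_var                   # predict step
        k = p / (p + meas_var)             # Kalman gain
        x += k * (float(z) - x)            # update with the new network estimate
        p *= (1.0 - k)
        smoothed.append(x)
    return np.array(smoothed)
```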
296

Deep Learning Based Crop Row Detection

Doha, Rashed Mohammad 05 1900
Indiana University-Purdue University Indianapolis (IUPUI) / Detecting crop rows from video frames in real time is a fundamental challenge in precision agriculture. The deep learning based semantic segmentation method U-Net, although successful in many tasks related to precision agriculture, performs poorly on this task. The reasons include the paucity of large-scale labeled datasets in this domain, the diversity of crops, and the diversity in appearance of the same crop at various stages of its growth. In this work, we discuss the development of a practical real-life crop row detection system in collaboration with an agricultural sprayer company. Our proposed method takes the output of semantic segmentation using U-Net and then applies a clustering based probabilistic temporal calibration which can adapt to different fields and crops without retraining the network. Experimental results validate that our method can be used both for refining the results of the U-Net to reduce errors and for frame interpolation of the input video stream. Upon the availability of more labeled data, we switched our approach from a semi-supervised model to a fully supervised end-to-end crop row detection model using a Feature Pyramid Network (FPN). Central to the FPN is a pyramid pooling module that extracts features from the input image at multiple resolutions, which gives the network the ability to use both local and global features when classifying pixels as crop rows. After training the FPN on the labeled dataset, our method obtained a mean IoU (Jaccard index) score of over 70% on the test set. We trained our method on only a subset of the corn dataset and tested its performance on multiple variations of weed pressure and crop growth stages to verify that the performance translates across these variations and is consistent over the entire dataset.
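The mean IoU (Jaccard index) reported above can be computed per class from segmentation masks; the sketch below assumes a binary crop-row/background labeling and is illustrative rather than the thesis's evaluation code.

```python
import numpy as np

def mean_iou(pred, target, num_classes=2):
    """pred and target are integer label masks of the same shape."""
    ious = []
    for c in range(num_classes):
        intersection = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(intersection / union)
    return float(np.mean(ious))
```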
297

Predicting Game Level Difficulty Using Deep Neural Networks

Purmonen, Sami January 2017
We explored the use of Monte Carlo tree search (MCTS) and deep learning to predict game level difficulty in Candy Crush Saga (Candy), measured as the number of attempts per success. A deep neural network (DNN) was trained on large amounts of game play data to predict moves from game states. The DNN played a diverse set of levels in Candy, and a regression model was fitted to predict human difficulty from bot difficulty. We compared our results to an MCTS bot. Our results show that the DNN can make estimates of game level difficulty comparable to MCTS in substantially shorter time.
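A hedged sketch of the final step, fitting a regression from bot difficulty (attempts per success for the DNN or MCTS player) to human difficulty; the model class and the numbers are illustrative assumptions, not results from the thesis.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Assumed example values of attempts-per-success for the bot and for players.
bot_difficulty = np.array([[1.8], [4.2], [9.5], [16.0]])
human_difficulty = np.array([2.5, 6.1, 13.8, 24.7])

reg = LinearRegression().fit(bot_difficulty, human_difficulty)
estimate = reg.predict(np.array([[7.0]]))  # predicted human difficulty for a new level
```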
298

Reconstruction and recommendation of realistic 3D models using cGANs

Villanueva Aylagas, Mónica January 2018
Three-dimensional modeling is the process of creating a representation of a surface or object in three dimensions via specialized software, where the modeler scans a real-world object into a point cloud, creates a completely new surface, or edits the selected representation. This process can be challenging due to factors like the complexity of the 3D creation software or the number of dimensions in play. This work proposes a framework that recommends three types of reconstructions of an incomplete or rough 3D model using Generative Adversarial Networks (GANs). These reconstructions, respectively, follow the distribution of real data, resemble the user model, and stay close to the dataset while keeping features of the input. The main advantage of this approach is the acceptance of 3D models as input to the GAN instead of latent vectors, which removes the need to train an extra network to project the model into the latent space. The systems are evaluated both quantitatively and qualitatively. The quantitative measure relies on the Intersection over Union (IoU) metric, while the qualitative evaluation is carried out through a user study. Experiments show that it is hard to create a system that generates realistic models following the distribution of the dataset, since users have different opinions on what is realistic. However, similarity between the user input and the reconstruction is well accomplished and is, in fact, the feature most valued by modelers.
299

Deep Active Learning for Short-Text Classification

Zhao, Wenquan January 2017
In this paper, we propose a novel active learning algorithm for short-text (Chinese) classification applied to a deep learning architecture. The topic thus belongs to a research area at the intersection of active learning and deep learning. One of the bottlenecks of deep learning for classification is that it relies on large numbers of labeled samples, which are expensive and time consuming to obtain. Active learning aims to overcome this disadvantage by asking the most useful queries, in the form of unlabeled samples to be labeled. In other words, active learning intends to achieve precise classification accuracy using as few labeled samples as possible. Such ideas have been investigated for conventional machine learning algorithms, such as the support vector machine (SVM) for image classification, and for deep neural networks, including convolutional neural networks (CNNs) and deep belief networks (DBNs) for image classification. Yet research on combining active learning with recurrent neural networks (RNNs) for short-text classification is rare. We demonstrate results for short-text classification on datasets from Zhuiyi Inc. Importantly, to achieve better classification accuracy with less computational overhead, the proposed algorithm shows large reductions in the number of labeled training samples compared to random sampling. Moreover, the proposed algorithm performs slightly better than the conventional method, uncertainty sampling. The proposed active learning algorithm dramatically decreases the number of labeled samples without significantly influencing the test classification accuracy of the original RNN classifier trained on the whole dataset. In some cases, the proposed algorithm even achieves better classification accuracy than the original RNN classifier.
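For reference, the conventional baseline mentioned above, pool-based uncertainty sampling, can be sketched as follows; the batch size and the least-confidence criterion are assumptions for illustration, not the thesis's proposed algorithm.

```python
import numpy as np

def uncertainty_query(probs, batch_size=10):
    """probs: (n_unlabeled, n_classes) predicted class probabilities.
    Returns indices of the least confident samples to send for labeling."""
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:batch_size]
```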
300

Sentiment Classification with Deep Neural Networks

Kalogiras, Vasileios January 2017
Sentiment analysis is a subfield of natural language processing (NLP) that attempts to analyze the sentiment of written text. It is a complex problem that entails different challenges, and for this reason it has been studied extensively. In past years, traditional machine learning algorithms and handcrafted methodologies provided state-of-the-art results. However, the recent deep learning renaissance has shifted interest towards end-to-end deep learning models. On the one hand, this results in more powerful models, but on the other hand, clear mathematical reasoning or intuition behind distinct models is still lacking. As a result, this thesis attempts to shed some light on recently proposed deep learning architectures for sentiment classification. A study of their differences is performed, along with empirical results on how changes in the structure or capacity of a model can affect its accuracy and the way it represents and "comprehends" sentences.
