291.
Human gait movement analysis using wearable solutions and Artificial Intelligence. Davarzani, Samaneh. 09 December 2022.
Gait recognition systems have gained tremendous attention due to their potential applications in healthcare, criminal investigation, sports biomechanics, and so forth. Wearable sensors integrated into wearable objects or mobile devices offer a new solution to gait recognition tasks. In this research, a sock prototype with embedded soft robotic sensors (SRS) was implemented to measure foot-ankle kinematic and kinetic data during three experiments designed to track participants' foot-ankle movement. Deep learning and statistical methods were employed to model SRS data against a motion capture (MoCap) system to determine whether SRS measurements can provide accurate kinematic and kinetic data. In the first study, the capacitance of the SRS related to basic foot-ankle movements was quantified during the gait of twenty participants on a flat surface and a cross-sloped surface. The second study addressed kinematic features: deep learning models were trained to estimate the joint angles in the sagittal and frontal planes measured by the MoCap system. Participant-specific models were established for ten healthy subjects walking on a treadmill, and the prototype was tested at various walking speeds to assess its ability to track movements across multiple speeds and to generalize models for estimating joint angles in the sagittal and frontal planes. The last study focuses on kinetic features, with the goal of determining the validity of SRS measurements; to this end, the pressure data measured by the SRS embedded in the sock prototype were compared with force plate data.
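As an illustration of the participant-specific modeling described above, the following sketch fits a small regression network mapping SRS capacitance channels to joint angles; the channel count, the two angle targets, and the synthetic data are assumptions, not the thesis setup.

```python
# Hedged sketch: per-participant regression from SRS capacitance to joint angles.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n_frames, n_channels = 5000, 4                      # assumed: 4 SRS capacitance channels
X = rng.normal(size=(n_frames, n_channels))         # stand-in capacitance readings
y = rng.normal(size=(n_frames, 2))                  # stand-in sagittal/frontal joint angles (deg)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X_tr, y_tr)
rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
print(f"held-out RMSE on synthetic data: {rmse:.2f} deg")
```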
292.
Software Requirements Classification Using Word Embeddings and Convolutional Neural Networks. Fong, Vivian Lin. 01 June 2018.
Software requirements classification, the practice of categorizing requirements by their type or purpose, can improve organization and transparency in the requirements engineering process and thus promote requirement fulfillment and software project completion. Requirements classification automation is a prominent area of research, as automation can alleviate the tediousness of manual labeling and reduce its reliance on domain expertise.
This thesis explores the application of deep learning techniques to software requirements classification, specifically the use of word embeddings for document representation when training a convolutional neural network (CNN). As past research endeavors mainly utilize information retrieval and traditional machine learning techniques, we explore the potential of deep learning for this particular task. With the support of learning libraries such as TensorFlow and Scikit-Learn and word embedding models such as word2vec and fastText, we build a Python system that trains and validates configurations of Naïve Bayes and CNN requirements classifiers. Applying our system to a suite of experiments on two well-studied requirements datasets, we recreate or establish the Naïve Bayes baselines and evaluate the impact of CNNs equipped with word embeddings trained from scratch versus word embeddings pre-trained on Big Data.
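To make the document-representation idea concrete, here is a minimal sketch of a CNN text classifier built on a fixed pre-trained embedding matrix; the vocabulary size, sequence length, class count, and the random stand-in embedding matrix are assumptions, not the thesis configuration.

```python
# Hedged sketch: CNN over word embeddings for requirements classification.
import numpy as np
import tensorflow as tf

vocab_size, embed_dim, seq_len, n_classes = 5000, 300, 100, 4        # assumed sizes
embedding_matrix = np.random.rand(vocab_size, embed_dim)             # stand-in for word2vec/fastText vectors

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(
        vocab_size, embed_dim,
        embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
        trainable=False),                                            # keep pre-trained vectors fixed
    tf.keras.layers.Conv1D(128, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.build(input_shape=(None, seq_len))
model.summary()
```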
293.
Graph Neural Networks Based on Multi-Rate Signal Decomposition for Bearing Fault Diagnosis. Guanhua Zhu (15454712). 12 May 2023.
Roller bearings are common components in mechanical systems for machining and production. The running state of roller bearings often determines the machining accuracy and productivity of a manufacturing line, and roller bearing failure may lead to the shutdown of production lines, resulting in serious economic losses. Therefore, research on roller bearing fault diagnosis has great value. This thesis first proposes a method of signal frequency spectral resampling to tackle the problem of bearing fault detection at different rotating speeds when only a single-speed dataset is available for training a network such as a one-dimensional convolutional neural network (1D CNN). Second, this work proposes a technique for connecting the graph structures constructed from spectral components of the different bearing fault frequency bands into a sparse graph structure, so that fault identification can be carried out effectively by a graph neural network in terms of both computational load and classification rate. Finally, the frequency spectral resampling method for feature extraction is validated using our self-collected datasets, and the performance of the graph neural network with the proposed sparse graph structure is validated using the Case Western Reserve University (CWRU) dataset as well as our self-collected datasets. The results show that the proposed method achieves higher bearing fault classification accuracy than recent machine learning and neural network approaches proposed by other researchers.
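As a rough illustration of the spectral-resampling idea (not the thesis code), the sketch below interpolates a vibration magnitude spectrum onto a shaft-order axis so that a fault tone recorded at different rotating speeds lands in the same bin; the sampling rate, speeds, and fault order are assumptions.

```python
# Hedged sketch: resample a spectrum onto a speed-normalized (order) axis.
import numpy as np

def order_spectrum(signal, fs, shaft_hz, n_orders=512, max_order=20.0):
    """Magnitude spectrum interpolated onto a shaft-order axis (frequency / shaft speed)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    orders = freqs / shaft_hz                        # normalize frequency by rotating speed
    target = np.linspace(0.0, max_order, n_orders)   # common axis shared by all speeds
    return np.interp(target, orders, spectrum)

# The same assumed fault order (3.6x shaft speed) peaks in the same bin at both speeds.
fs = 12_000
t = np.arange(0, 1.0, 1 / fs)
for shaft_hz in (20.0, 35.0):
    x = np.sin(2 * np.pi * 3.6 * shaft_hz * t)       # synthetic fault tone at a fixed order
    print(shaft_hz, order_spectrum(x, fs, shaft_hz).argmax())
```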
294.
Assessing the Streamline Plausibility Through Convex Optimization for Microstructure Informed Tractography (COMMIT) with Deep Learning. Wan, Xinyi. January 2023.
Tractography is widely used in brain connectivity studies based on diffusion magnetic resonance imaging (dMRI) data. However, the lack of ground truth and the abundance of anatomically implausible streamlines in tractograms have raised challenges and concerns about their use, for example in brain connectivity studies. Tractogram filtering methods have been developed to remove the faulty connections. In this study, we focus on one of these filtering methods, Convex Optimization Modeling for Microstructure Informed Tractography (COMMIT), which seeks the set of streamlines that best reconstructs the dMRI data using a global optimization approach. This method is biased when assessing individual streamlines, so a method named randomized COMMIT (rCOMMIT) is proposed to obtain multiple assessments for each streamline. The resulting acceptance rate assigned to each streamline divides the streamlines into three groups, which are regarded as pseudo ground truth from rCOMMIT, so that neural networks can be trained on classification tasks. The trained classifiers distinguish the obtained groups of plausible and implausible streamlines with an accuracy of around 77%. Following the same methodology, the results from rCOMMIT and randomized SIFT are compared. The intersection between the two methods is analyzed with neural networks as well, which achieve an accuracy of around 87% on the binary task of separating plausible from implausible streamlines.
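A hedged sketch of the pseudo-labeling step described above: each streamline's acceptance rate is the fraction of randomized COMMIT runs that kept it, and thresholds split the streamlines into three groups. The thresholds and the synthetic run matrix are assumptions.

```python
# Hedged sketch: acceptance rates from randomized runs -> three pseudo-label groups.
import numpy as np

def pseudo_labels(kept_matrix, low=0.3, high=0.7):
    """kept_matrix: (n_runs, n_streamlines) boolean array, True = streamline kept in that run."""
    acceptance = kept_matrix.mean(axis=0)                 # per-streamline acceptance rate in [0, 1]
    labels = np.full(acceptance.shape, "uncertain", dtype=object)
    labels[acceptance >= high] = "plausible"
    labels[acceptance <= low] = "implausible"
    return acceptance, labels

rng = np.random.default_rng(1)
kept = rng.random((50, 10)) < rng.random(10)              # synthetic outcomes: 50 runs, 10 streamlines
rates, groups = pseudo_labels(kept)
print(np.round(rates, 2))
print(groups)
```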
295.
Extracting Topography from Historic Topographic Maps Using GIS-Based Deep Learning. Pierce, Briar Z.; Ernenwein, Eileen G. 25 April 2023.
Historical topographic maps are valuable resources for studying past landscapes, but two-dimensional cartographic features are unsuitable for geospatial analysis. They must be extracted and converted into digital formats. This has been accomplished by researchers using sophisticated image processing and pattern recognition techniques, and more recently, artificial intelligence. While these methods are sometimes successful, they require a high level of technical expertise, limiting their accessibility. This research presents a straightforward method practitioners can use to create digital representations of historical topographic data within commercially available Geographic Information Systems (GIS) software. This study uses convolutional neural networks to extract elevation contour lines from a 1940 United States Geological Survey (USGS) topographic map in Sevier County, TN, ultimately producing a Digital Elevation Model (DEM). The topographically derived DEM (TOPO-DEM) is compared to a modern LiDAR-derived DEM to analyze its quality and utility. GIS-capable historians, archaeologists, geographers, and others can use this method in their research and land management practices.
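One simple way the TOPO-DEM could be compared against the LiDAR-derived DEM is sketched below with synthetic arrays standing in for the two co-registered rasters; the array sizes and error magnitudes are assumptions, not the study's results.

```python
# Hedged sketch: elevation-error statistics between a contour-derived DEM and a LiDAR DEM.
# In practice both surfaces would be loaded as co-registered rasters (e.g., via rasterio).
import numpy as np

rng = np.random.default_rng(0)
lidar_dem = rng.uniform(250.0, 600.0, size=(500, 500))              # stand-in LiDAR elevations (m)
topo_dem = lidar_dem + rng.normal(0.0, 2.5, size=lidar_dem.shape)   # stand-in contour-derived DEM

diff = topo_dem - lidar_dem                                         # elevation error surface (m)
print("mean error (m):", round(float(diff.mean()), 2))
print("RMSE (m):      ", round(float(np.sqrt((diff ** 2).mean())), 2))
```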
296.
Single Molecule Analysis and Wavefront Control with Deep Learning. Peiyi Zhang (15361429). 27 April 2023.
Analyzing single molecule emission patterns plays a critical role in retrieving the structural and physiological information of their tagged targets and, further, in understanding their interactions and cellular context. These emission patterns of tiny light sources (i.e., point spread functions, PSFs) simultaneously encode information such as the molecule's location, orientation, the environment within the specimen, and the paths the emitted photons took before being captured by the camera. However, retrieving multiple classes of information beyond the 3D position from complex or high-dimensional single molecule data remains challenging, due to the difficulty of perceiving and summarizing a comprehensive yet succinct model. We developed smNet, a deep neural network that can extract multiplexed information near the theoretical limit from both complex and high-dimensional point spread functions. Through simulated and experimental data, we demonstrated that smNet can be trained to efficiently extract both molecular and specimen information, such as molecule location, dipole orientation, and wavefront distortions, from complex and subtle features of the PSFs that are otherwise considered too complex for established algorithms.
Single molecule localization microscopy (SMLM) forms super-resolution images with a resolution of several to tens of nanometers, relying on accurate localization of molecules' 3D positions from isolated single molecule emission patterns. However, inhomogeneous refractive indices distort and blur single molecule emission patterns, reduce the information content carried by each detected photon, increase localization uncertainty, and thus cause significant resolution loss that is irreversible by post-processing. To compensate for tissue-induced aberrations, conventional sensorless adaptive optics methods rely on iterative mirror changes and image-quality metrics. However, these metrics produce inconsistent, and sometimes opposite, responses, which fundamentally limits the efficacy of such approaches for aberration correction in tissues. Bypassing the previous iterative trial-then-evaluate process, we developed deep-learning-driven adaptive optics (DL-AO) for SMLM to directly infer wavefront distortion and compensate for it in near real time during data acquisition. Our trained deep neural network monitors the individual emission patterns from single molecule experiments, infers their shared wavefront distortion, feeds the estimates through a dynamic (Kalman) filter, and drives a deformable mirror to compensate for sample-induced aberrations. We demonstrated that DL-AO restores single molecule emission patterns to conditions approaching those unaffected by the specimen, and improves the resolution and fidelity of 3D SMLM through over 130 µm of brain tissue, with as few as 3-20 mirror changes.
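The following is a schematic sketch of the control loop described above, with every number an assumption: a network's per-batch estimates of Zernike-mode coefficients are smoothed by a simple per-mode Kalman filter, and the negated state drives the deformable mirror. It illustrates the loop structure, not the published DL-AO implementation.

```python
# Hedged sketch: estimate -> Kalman smoothing -> mirror command loop.
import numpy as np

n_modes = 21
state = np.zeros(n_modes)                   # filtered wavefront estimate (Zernike coefficients)
P = np.ones(n_modes)                        # per-mode estimate variance
Q, R = 1e-3, 1e-1                           # assumed process and measurement noise

def kalman_update(state, P, measurement):
    P_pred = P + Q                                   # predict step (random-walk model)
    K = P_pred / (P_pred + R)                        # Kalman gain
    state = state + K * (measurement - state)        # correct with the network's estimate
    return state, (1.0 - K) * P_pred

for batch in range(100):                             # each batch of detected single molecules
    z_hat = np.random.normal(scale=0.1, size=n_modes)    # stand-in for the DNN's inference
    state, P = kalman_update(state, P, z_hat)
    mirror_command = -state                          # drive the deformable mirror to cancel aberration
```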
297.
Deep Learning Based Crop Row Detection. Doha, Rashed Mohammad.
Indiana University-Purdue University Indianapolis (IUPUI)
Detecting crop rows from video frames in real time is a fundamental challenge in the field of precision agriculture. A deep learning based semantic segmentation method, namely U-Net, although successful in many tasks related to precision agriculture, performs poorly on this task. The reasons include the paucity of large-scale labeled datasets in this domain, the diversity of crops, and the varied appearance of the same crops at different stages of their growth. In this work, we discuss the development of a practical real-life crop row detection system in collaboration with an agricultural sprayer company. Our proposed method takes the output of semantic segmentation using U-Net and then applies a clustering-based probabilistic temporal calibration that can adapt to different fields and crops without retraining the network. Experimental results validate that our method can be used both for refining the results of the U-Net to reduce errors and for frame interpolation of the input video stream. Upon the availability of more labeled data, we switched our approach from a semi-supervised model to a fully supervised end-to-end crop row detection model using a Feature Pyramid Network (FPN). Central to the FPN is a pyramid pooling module that extracts features from the input image at multiple resolutions, which enables the network to use both local and global features in classifying pixels as crop rows. After training the FPN on the labeled dataset, our method obtained a mean IoU (Jaccard index) score of over 70% on the test set. We trained our method on only a subset of the corn dataset and tested its performance on multiple variations of weed pressure and crop growth stages to verify that the performance translates across these variations and is consistent across the entire dataset.
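For reference, here is a small sketch of the mean IoU (Jaccard index) metric quoted above, applied to toy segmentation masks; it illustrates the metric itself, not the thesis evaluation code.

```python
# Hedged sketch: mean IoU over classes for two integer label maps of identical shape.
import numpy as np

def mean_iou(pred, target, n_classes=2):
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                       # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

pred   = np.array([[0, 1, 1], [0, 1, 0]])   # toy prediction with one extra crop-row pixel
target = np.array([[0, 1, 0], [0, 1, 0]])
print(mean_iou(pred, target))               # about 0.71 for this toy example
```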
298.
Predicting Game Level Difficulty Using Deep Neural Networks. Purmonen, Sami. January 2017.
We explored the use of Monte Carlo tree search (MCTS) and deep learning to predict game level difficulty in Candy Crush Saga (Candy), measured as the number of attempts per success. A deep neural network (DNN) was trained on large amounts of gameplay data to predict moves from game states. The DNN played a diverse set of levels in Candy, and a regression model was fitted to predict human difficulty from bot difficulty. We compared our results to an MCTS bot. Our results show that the DNN can produce estimates of game level difficulty comparable to MCTS in substantially shorter time.
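As an illustration of the final calibration step, the sketch below fits a simple log-log regression from bot difficulty to human difficulty; the data points and the choice of a log-log linear model are assumptions, not the thesis model.

```python
# Hedged sketch: regression mapping bot attempts/success to human attempts/success.
import numpy as np
from sklearn.linear_model import LinearRegression

bot_attempts   = np.array([1.2, 2.5, 4.0, 7.5, 15.0, 30.0])    # synthetic bot difficulty
human_attempts = np.array([1.5, 3.0, 6.0, 12.0, 22.0, 55.0])   # synthetic human difficulty

model = LinearRegression().fit(np.log(bot_attempts).reshape(-1, 1), np.log(human_attempts))
predicted = np.exp(model.predict(np.log([[10.0]])))            # estimate human difficulty for a new level
print(f"predicted human attempts per success: {predicted[0]:.1f}")
```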
299.
Reconstruction and recommendation of realistic 3D models using cGANs. Villanueva Aylagas, Mónica. January 2018.
Three-dimensional modeling is the process of creating a representation of a surface or object in three dimensions via specialized software, where the modeler scans a real-world object into a point cloud, creates a completely new surface, or edits the selected representation. This process can be challenging due to factors like the complexity of the 3D creation software or the number of dimensions in play. This work proposes a framework that recommends three types of reconstructions of an incomplete or rough 3D model using Generative Adversarial Networks (GANs). These reconstructions, respectively, follow the distribution of real data, resemble the user model, and stay close to the dataset while keeping features of the input. The main advantage of this approach is that the GAN accepts 3D models as input instead of latent vectors, which avoids the need to train an extra network to project the model into the latent space. The systems are evaluated both quantitatively and qualitatively: the quantitative measure relies on the Intersection over Union (IoU) metric, while the qualitative evaluation is carried out through a user study. Experiments show that it is hard to create a system that generates realistic models following the distribution of the dataset, since users have different opinions on what is realistic. However, similarity between the user input and the reconstruction is well achieved and is, in fact, the feature modelers value most.
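To illustrate the design choice of feeding 3D models to the GAN directly, here is a minimal encoder-decoder generator that takes a 32x32x32 voxel grid as input and outputs a completed grid, so no projection into latent space is needed; the architecture, resolution, and layer sizes are assumptions, not the thesis network.

```python
# Hedged sketch: a voxel-in, voxel-out generator (no latent-vector input).
import tensorflow as tf

def make_generator(res=32):
    inp = tf.keras.Input(shape=(res, res, res, 1))                  # rough/incomplete voxel grid
    x = tf.keras.layers.Conv3D(32, 4, strides=2, padding="same", activation="relu")(inp)
    x = tf.keras.layers.Conv3D(64, 4, strides=2, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv3DTranspose(32, 4, strides=2, padding="same", activation="relu")(x)
    out = tf.keras.layers.Conv3DTranspose(1, 4, strides=2, padding="same", activation="sigmoid")(x)
    return tf.keras.Model(inp, out)                                 # reconstructed voxel occupancy

make_generator().summary()
```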
300.
Deep Active Learning for Short-Text Classification. Zhao, Wenquan. January 2017.
In this paper, we propose a novel active learning algorithm for short-text (Chinese) classification applied to a deep learning architecture. This topic thus lies at the intersection of active learning and deep learning. One of the bottlenecks of deep learning for classification is that it relies on a large number of labeled samples, which are expensive and time-consuming to obtain. Active learning aims to overcome this disadvantage by asking the most useful queries in the form of unlabeled samples to be labeled. In other words, active learning intends to achieve precise classification accuracy using as few labeled samples as possible. Such ideas have been investigated in conventional machine learning algorithms, such as the support vector machine (SVM) for image classification, and in deep neural networks, including convolutional neural networks (CNNs) and deep belief networks (DBNs) for image classification. Yet research on combining active learning with recurrent neural networks (RNNs) for short-text classification is rare. We demonstrate results for short-text classification on datasets from Zhuiyi Inc. Importantly, to achieve better classification accuracy with less computational overhead, the proposed algorithm shows large reductions in the number of labeled training samples compared to random sampling. Moreover, the proposed algorithm performs slightly better than the conventional uncertainty sampling method. The proposed active learning algorithm dramatically decreases the number of labeled samples without significantly affecting the test classification accuracy of the original RNN classifier trained on the whole dataset. In some cases, the proposed algorithm even achieves better classification accuracy than the original RNN classifier.
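A hedged sketch of the uncertainty-sampling baseline mentioned above: from the predicted class probabilities over an unlabeled pool, the least-confident examples are selected for labeling. The pool size, class count, and stand-in predictions are assumptions.

```python
# Hedged sketch: least-confidence query selection from an unlabeled pool.
import numpy as np

def least_confident(probabilities, k):
    """probabilities: (n_samples, n_classes) softmax outputs from the current classifier."""
    confidence = probabilities.max(axis=1)            # probability of the top predicted class
    return np.argsort(confidence)[:k]                 # indices of the k least-confident samples

rng = np.random.default_rng(0)
pool_probs = rng.dirichlet(alpha=np.ones(5), size=1000)   # stand-in RNN predictions on the pool
query_indices = least_confident(pool_probs, k=32)          # these 32 texts go to annotators
print(query_indices[:10])
```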