  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
201

Automatická detekce událostí ve fotbalových zápasech / An automatic football match event detection

Dvonč, Tomáš January 2020 (has links)
This diploma thesis describes methods suitable for the automatic detection of events in video sequences of football matches. The first part of the work analyses the available data and develops procedures for extracting information from it. The second part deals with the implementation of the selected methods and a neural network algorithm for corner-kick detection. Two experiments were performed in this work: the first captures static information from a single image, while the second detects events from spatio-temporal data. The output of this work is a program for automatic event detection that can be used to interpret the results of the experiments. This work may serve as a basis for gaining new knowledge about the problem and for the further development of event detection in football.
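The abstract does not include implementation details, but a minimal sketch of the two experimental settings it describes, assuming a PyTorch implementation and a binary corner-kick label (both assumptions, not the author's code), could look like this:

```python
# Hypothetical sketch: single-frame CNN classifier, plus a spatio-temporal variant
# that stacks several consecutive frames along the channel dimension.
import torch
import torch.nn as nn

class FrameEventClassifier(nn.Module):
    """Classifies a frame (or a stack of frames) as corner kick / no event."""
    def __init__(self, in_channels: int = 3, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Static experiment: one RGB frame. Spatio-temporal experiment: e.g. 5 stacked
# frames -> 15 input channels.
static_model = FrameEventClassifier(in_channels=3)
temporal_model = FrameEventClassifier(in_channels=15)
logits = static_model(torch.randn(1, 3, 224, 224))
```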
202

Embracing Visual Experience and Data Knowledge: Efficient Embedded Memory Design for Big Videos and Deep Learning

Edstrom, Jonathon January 2019 (has links)
Energy-efficient memory designs are becoming increasingly important, especially for applications related to mobile video technology and machine learning. The growing popularity of smartphones, tablets, and other mobile devices has created an exponential demand for video applications in today’s society. When mobile devices display video, the embedded video memory within the device consumes a large amount of the total system power. This issue has created the need for power-quality trade-off techniques that enable good-quality video output while simultaneously reducing power consumption. Similarly, power-efficiency issues have arisen within the area of machine learning, especially for applications requiring large and fast computation, such as neural networks. Using the accumulated data knowledge from various machine learning applications, there is now the potential to create more intelligent memory capable of an optimized trade-off between energy efficiency, area overhead, and classification accuracy in the learning systems. This dissertation reviews recently completed works involving video and machine-learning memories. Based on the collected results from a variety of methods, including subjective trials, discovered data-mining patterns, software simulations, and hardware power and performance tests, the presented memories provide novel ways to significantly enhance power efficiency for future memory devices. An overview of related works, especially the relevant state-of-the-art research, is referenced for comparison in order to produce memory design methodologies that exhibit optimal quality, low implementation overhead, and maximum power efficiency. / National Science Foundation / ND EPSCoR / Center for Computationally Assisted Science and Technology (CCAST)
203

A Closer Look at Neighborhoods in Graph Based Point Cloud Scene Semantic Segmentation Networks

Itani, Hani 11 1900 (has links)
Large-scale semantic segmentation is considered one of the fundamental tasks in 3D scene understanding. Point clouds provide a basic and rich geometric representation of scenes and tangible objects. Convolutional Neural Networks (CNNs) have demonstrated impressive success in processing regular discrete data such as 2D images and 1D audio. However, CNNs do not directly generalize to point cloud processing due to their irregular and unordered nature. One way to extend CNNs to point cloud understanding is to derive an intermediate Euclidean representation of a point cloud by projecting onto the image domain, voxelizing, or treating points as vertices of an undirected graph. Graph CNNs (GCNs) have proven to be a very promising solution for deep learning on irregular data such as social networks, biological systems, and recently point clouds. Early works in the literature on graph-based point networks relied on constructing dynamic graphs in the node feature space to define a convolution kernel. Later works constructed hierarchical static graphs in 3D space for an encoder-decoder framework inspired by image segmentation. This thesis takes a closer look at both dynamic and static graph neighborhoods of graph-based point networks for the task of semantic segmentation in order to: 1) discuss a potential cause for why going deep in dynamic GCNs does not necessarily lead to improved performance, and 2) propose a new approach to treating points in a static graph neighborhood for improved information aggregation. The proposed method leads to an efficient graph-based 3D semantic segmentation network that is on par with current state-of-the-art methods on both indoor and outdoor scene semantic segmentation benchmarks such as S3DIS and Semantic3D.
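As an illustration of what "dynamic" versus "static" neighborhoods mean here, a minimal sketch (assuming a PyTorch implementation and a simple max-pooling aggregation, not the author's actual network) might be:

```python
# Dynamic k-NN graph built in feature space vs. static k-NN graph built in 3D space.
import torch

def knn_neighborhoods(points: torch.Tensor, k: int) -> torch.Tensor:
    """Return indices of the k nearest neighbors for every point: (N, D) -> (N, k)."""
    dists = torch.cdist(points, points)                      # pairwise distances (N, N)
    return dists.topk(k + 1, largest=False).indices[:, 1:]   # drop the point itself

def aggregate(features: torch.Tensor, idx: torch.Tensor) -> torch.Tensor:
    """Max-pool neighbor features for every point: (N, C) with (N, k) -> (N, C)."""
    return features[idx].max(dim=1).values

xyz = torch.randn(1024, 3)      # point coordinates
feats = torch.randn(1024, 64)   # per-point features from a previous layer

static_idx = knn_neighborhoods(xyz, k=16)     # static graph: built once in 3D space
dynamic_idx = knn_neighborhoods(feats, k=16)  # dynamic graph: rebuilt in feature space
pooled = aggregate(feats, static_idx)
```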
204

Traffic Forecasting Applications Using Crowdsourced Traffic Reports and Deep Learning

Alammari, Ali 05 1900 (has links)
Intelligent transportation systems (ITS) are essential tools for traffic planning, analysis, and forecasting that can utilize the huge amount of traffic data available nowadays. In this work, we aggregated detailed traffic flow sensor data, Waze reports, OpenStreetMap (OSM) features, and weather data from the California Bay Area for 6 months. Using that data, we studied three novel ITS applications using convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The first experiment is an analysis of the relation between roadway shapes and accident occurrence, where results show that the speed limit and number of lanes are significant predictors of major accidents on highways. The second experiment presents a novel method for forecasting congestion severity using crowdsourced data only (Waze, OSM, and weather), without the need for traffic sensor data. The third experiment studies the improvement of traffic flow forecasting using accidents, number of lanes, weather, and time-related features, where results show significant performance improvements when the additional features were used.
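A hypothetical sketch of the congestion-severity forecasting setup described above, assuming hourly feature vectors built from Waze reports, OSM attributes, and weather (feature count, hidden size, and severity levels are illustrative assumptions):

```python
import torch
import torch.nn as nn

class CongestionForecaster(nn.Module):
    def __init__(self, num_features: int, hidden: int = 64, num_severities: int = 4):
        super().__init__()
        self.rnn = nn.LSTM(num_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_severities)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, num_features) -> severity logits for the next step
        _, (h, _) = self.rnn(x)
        return self.head(h[-1])

model = CongestionForecaster(num_features=12)
history = torch.randn(8, 24, 12)   # 8 road segments, 24 past hours, 12 features each
logits = model(history)
```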
205

Engagement Recognition in an E-learning Environment Using Convolutional Neural Network

Jiang, Zeting, Zhu, Kaicheng January 2021 (has links)
Background. Distance education has rapidly become popular among students and teachers. This situation has changed the traditional way of teaching in the classroom and requires students to learn more independently. At the same time, it brings drawbacks: teachers cannot obtain real-time feedback on students’ engagement. This thesis explores the feasibility of applying a lightweight model to recognize student engagement and the practicality of such a model in a distance education environment. Objectives. This thesis aims to develop and apply a lightweight model based on a Convolutional Neural Network (CNN) with acceptable performance to recognize the engagement of students in a distance learning environment, and to evaluate and compare the optimized model with the selected original and other models on different performance metrics. Methods. This thesis uses experiments and a literature review as research methods. The literature review is conducted to select effective CNN-based models for engagement recognition and feasible strategies for optimizing the chosen models. These selected and optimized models are trained, tested, evaluated, and compared as independent variables in the experiments; the performance of the different models is used as the dependent variable. Results. Based on the literature review, ShuffleNet v2 is selected as the most suitable CNN architecture for the engagement recognition task, with Inception v3 and ResNet used as classic CNN architectures for comparison. An attention mechanism and activation-function replacement are used as optimization methods for ShuffleNet v2. The pre-experiment results show that ShuffleNet v2 with the Leaky ReLU activation achieves the highest accuracy among the tested activation functions. The experimental results show that the optimized model performs better on engagement recognition tasks than the baseline ShuffleNet v2, ResNet v2, and Inception v3 models. Conclusions. The analysis of the experimental results shows that the optimized ShuffleNet v2 has the best performance and is the most suitable model for real-world applications and deployment on mobile platforms.
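A minimal sketch of one of the optimizations described above: taking a ShuffleNet v2 backbone and swapping its ReLU activations for Leaky ReLU. The class count (e.g. four engagement levels), the negative slope, and the use of the torchvision model are assumptions, not the thesis's exact configuration:

```python
import torch.nn as nn
from torchvision.models import shufflenet_v2_x1_0

def replace_relu_with_leaky(module: nn.Module, slope: float = 0.1) -> None:
    """Recursively replace every nn.ReLU in the model with nn.LeakyReLU."""
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, nn.LeakyReLU(negative_slope=slope, inplace=True))
        else:
            replace_relu_with_leaky(child, slope)

model = shufflenet_v2_x1_0(num_classes=4)  # e.g. disengaged ... highly engaged
replace_relu_with_leaky(model)
```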
206

Learning to Rank with Contextual Information

Han, Peng 15 November 2021 (has links)
Learning to rank is utilized in many scenarios, such as disease-gene association, information retrieval, and recommender systems. Improving the prediction accuracy of the ranking model is the main target of existing works. Contextual information has a significant influence on the ranking problem and has been proven effective in increasing the prediction performance of ranking models. We therefore construct similarities for different types of entities that utilize contextual information uniformly in an extensible way. Once the similarities are constructed from contextual information, how to utilize them in different types of ranking models is the task we tackle. In this thesis, we propose four algorithms for learning to rank with contextual information. To refine the matrix factorization framework, we propose an area under the ROC curve (AUC) loss to overcome the sparsity problem. Clustering and sampling methods are used to utilize the contextual information from a global perspective, and an objective function with an optimal solution is proposed to exploit the contextual information from a local perspective. Then, within a deep learning framework, we apply a graph convolutional network (GCN) to the ranking problem in combination with matrix factorization; contextual information is utilized to generate the input embeddings and graph kernels for the GCN. The third method in this thesis directly exploits the contextual information for ranking: a Laplacian loss is utilized to solve the ranking problem, which can optimize the ranking matrix directly, so that entities with similar contextual information obtain similar ranking results. Finally, we propose a two-step method to solve the ranking problem for sequential data. The first step is to generate embeddings for all entities with a new sampling strategy; a graph neural network (GNN) and long short-term memory (LSTM) are combined to generate the representation of the sequential data. Once we have this representation, we can solve the ranking problem with a pair-wise loss and a sampling strategy.
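A hypothetical sketch of a pairwise AUC-style ranking objective of the kind mentioned above, on top of a toy matrix-factorization scorer (the smooth surrogate, embedding sizes, and sampling are illustrative assumptions, not the thesis's formulation):

```python
import torch
import torch.nn.functional as F

def pairwise_auc_loss(scores_pos: torch.Tensor, scores_neg: torch.Tensor) -> torch.Tensor:
    """Smooth surrogate for 1 - AUC: penalize negatives scored above positives."""
    diff = scores_pos.unsqueeze(1) - scores_neg.unsqueeze(0)   # (P, N) pairwise margins
    return F.softplus(-diff).mean()

# Toy matrix-factorization scorer: score(u, i) = <user_emb[u], item_emb[i]>
user_emb = torch.randn(100, 32, requires_grad=True)
item_emb = torch.randn(500, 32, requires_grad=True)
u = 7
pos_items = torch.tensor([3, 42, 17])
neg_items = torch.randint(0, 500, (20,))
loss = pairwise_auc_loss(user_emb[u] @ item_emb[pos_items].T,
                         user_emb[u] @ item_emb[neg_items].T)
loss.backward()
```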
207

Exploring Ocean Animal Trajectory Pattern via Deep Learning

Wang, Su 23 May 2016 (has links)
We trained a combined deep convolutional neural network to predict seals’ age (3 categories) and gender (2 categories). The entire dataset contains 110 seals with around 489 thousand location records. Most records are continuous and measured at a regular step. We created five convolutional layers for feature representation and two fully connected structures as the age and gender classifiers, respectively; each classifier consists of three fully connected layers. Treating the seals’ latitude and longitude as input, the entire deep learning network, which includes 780,000 neurons and 2,097,000 parameters, reaches a 70.72% accuracy rate for predicting seals’ age and simultaneously achieves 79.95% for gender estimation.
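A hypothetical sketch of the architecture described above: five convolutional layers over a window of (latitude, longitude) samples, followed by two separate three-layer fully connected heads for age (3 classes) and gender (2 classes). Layer widths and the window length are illustrative assumptions:

```python
import torch
import torch.nn as nn

def head(out_classes: int) -> nn.Sequential:
    # Three fully connected layers per classifier head, as described in the abstract.
    return nn.Sequential(nn.Linear(128, 64), nn.ReLU(),
                         nn.Linear(64, 32), nn.ReLU(),
                         nn.Linear(32, out_classes))

class SealNet(nn.Module):
    def __init__(self):
        super().__init__()
        channels = [2, 32, 64, 64, 128, 128]      # 2 input channels: lat, lon
        convs = []
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            convs += [nn.Conv1d(c_in, c_out, kernel_size=3, padding=1), nn.ReLU()]
        self.backbone = nn.Sequential(*convs, nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.age_head, self.gender_head = head(3), head(2)

    def forward(self, x: torch.Tensor):
        z = self.backbone(x)                      # x: (batch, 2, window_length)
        return self.age_head(z), self.gender_head(z)

model = SealNet()
age_logits, gender_logits = model(torch.randn(4, 2, 128))
```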
208

SeedQuant: A Deep Learning-based Census Tool for Seed Germination of Root Parasitic Plants

Ramazanova, Merey 30 April 2020 (has links)
Witchweeds and broomrapes are root parasitic weeds that represent one of the main threats to global food security. By drastically reducing host crops’ yield, the parasites are often responsible for enormous economic losses, estimated in billions of dollars annually. Parasitic plants rely on a chemical cue in the rhizosphere indicating the presence of a host plant in proximity. Using this host dependency, research on parasitic plants focuses on understanding the triggers necessary for parasitic seed germination, in order to either reduce germination in the presence of crops or provoke germination without hosts (i.e. suicidal germination). For this purpose, a number of synthetic analogs and inhibitors have been developed and their biological activities studied on parasitic plants around the world using various protocols. Current studies use germination-based bioassays, in which pre-conditioned parasitic seeds are placed in the presence of a chemical or plant root exudates and the germination ratio is assessed. Although these protocols are very sensitive at the chemical level, recording the germination rate is time-consuming, represents a challenging task for researchers, and could easily be sped up by leveraging automated seed detection algorithms. In order to accelerate such protocols, we propose an automatic seed census tool built on the latest developments in computer vision. We use a deep learning approach to object detection, the Faster R-CNN algorithm, to count and discriminate germinated from non-germinated seeds. Our method has shown an accuracy of 95% in counting seeds on completely new images, and reduces the counting time by a significant margin, from 5 minutes to a fraction of a second per image. We believe our proposed software, “SeedQuant”, will be of great help for lab bioassays performing large-scale chemical screening for parasitic seed applications.
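A hypothetical sketch of the detection-based counting step described above, using torchvision's off-the-shelf Faster R-CNN rather than the authors' trained model; the class indices (1 = germinated, 2 = non-germinated) and the score threshold are assumptions:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(num_classes=3)   # background + 2 seed classes
model.eval()

image = torch.rand(3, 800, 800)                  # one bioassay plate image in [0, 1]
with torch.no_grad():
    prediction = model([image])[0]               # dict with boxes, labels, scores

keep = prediction["scores"] > 0.5
germinated = int((prediction["labels"][keep] == 1).sum())
non_germinated = int((prediction["labels"][keep] == 2).sum())
print(f"germination ratio: {germinated / max(germinated + non_germinated, 1):.2f}")
```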
209

An Empirical Study of the Distributed Ellipsoidal Trust Region Method for Large Batch Training

Alnasser, Ali 10 February 2021 (has links)
Neural network optimizers are dominated by first-order methods due to their inexpensive computational cost per iteration. However, it has been shown that first-order optimization is prone to reaching sharp minima when training with large batch sizes. As the batch size increases, the statistical stability of the problem increases, a regime that is well suited for second-order optimization methods. In this thesis, we study a distributed ellipsoidal trust region model for neural networks. We use a block-diagonal approximation of the Hessian, assigning consecutive layers of the network to each process, and solve in parallel for the update direction of each subset of the parameters. We show that our optimizer is fit for large batch training as well as for an increasing number of processes.
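For context, the trust-region subproblem solved at each iteration takes roughly the following form (a generic statement, not the thesis's exact formulation; the ellipsoid matrix M and the per-block split are assumptions for illustration):

```latex
\min_{s}\; m(s) \;=\; f(w) + g^{\top} s + \tfrac{1}{2}\, s^{\top} B\, s
\qquad \text{s.t.} \qquad \|s\|_{M} = \sqrt{s^{\top} M s} \;\le\; \Delta ,
\qquad B = \mathrm{blkdiag}(B_{1}, \dots, B_{p}),
```

where g is the gradient, B the block-diagonal Hessian approximation (one block B_i per group of consecutive layers assigned to process i), M the ellipsoid-defining scaling matrix, and Δ the trust-region radius; the block structure lets each process solve for its own sub-block of the update direction in parallel.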
210

Comparing a gang-like scheduler with the default Kubernetes scheduler in a multi-tenant serverless distributed deep learning training environment

Lövenvald, Frans-Lukas January 2021 (has links)
Systems for running distributed deep learning training on the cloud have recently been developed. An important component of a distributed deep learning job handler is its resource allocation scheduler, which allocates computing resources to parts of a distributed training architecture. In this thesis, a serverless distributed deep learning job handler using Kubernetes was built to compare job completion time under two different Kubernetes schedulers: the default Kubernetes scheduler and a gang-like custom scheduler. These schedulers were compared through experiments with different configurations of deep learning models, resource counts, and numbers of concurrent jobs. No significant difference in job completion time between the schedulers could be found. However, the gang scheduler showed two benefits over the default scheduler. First, it prevents resource deadlocks in which one or multiple jobs lock resources but are unable to start. Second, it reduces the risk of epoch straggling, where a job is allocated too few workers to complete epochs in a reasonable time and thereby prevents other jobs from using the resources it has locked.
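A hypothetical sketch of how a job handler can direct a training job's worker pods to a custom (e.g. gang-like) scheduler instead of the default one, using the official Kubernetes Python client; the scheduler name, image, and resource limits are placeholders, not the thesis's actual configuration:

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CoreV1Api()

def launch_worker(job_id: str, index: int, scheduler: str = "gang-scheduler") -> None:
    pod = {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": f"train-{job_id}-worker-{index}",
                     "labels": {"job": job_id}},
        "spec": {
            "schedulerName": scheduler,   # omit to fall back to the default scheduler
            "restartPolicy": "Never",
            "containers": [{
                "name": "worker",
                "image": "example.com/ddl-worker:latest",   # placeholder image
                "resources": {"limits": {"cpu": "2", "memory": "4Gi"}},
            }],
        },
    }
    api.create_namespaced_pod(namespace="default", body=pod)
```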
