About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Discovery of Linear Trajectories in Geographically Distributed Datasets

Jhaver, Rishi January 2003 (has links)
No description available.
2

Deep Learning for Facial Recognition in Video

Mihalčin, Tomáš January 2018 (has links)
This diploma thesis focuses on face recognition in video, specifically on how to aggregate feature vectors into a single discriminative vector, also called a template. It examines the effect of extremely angled faces on verification accuracy, and it compares templates built from vectors extracted from video frames with templates built from vectors extracted from photographs. The proposed hypothesis is tested with two deep convolutional neural networks: the well-known VGG-16 model and a model called Fingera, provided by the company Innovatrics. Several experiments were carried out in the course of the work, and their results confirm the success of the proposed technique. The ROC curve was chosen as the accuracy metric, and the Caffe framework was used for working with the neural networks.
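To make the aggregation step concrete, here is a minimal, hedged sketch of one common baseline for building a template from per-frame embeddings: L2-normalize each vector, average, re-normalize, then verify with cosine similarity. The thesis's exact aggregation method may differ; the function names and the threshold value below are illustrative assumptions.

```python
import numpy as np

def build_template(frame_embeddings):
    # Stack per-frame feature vectors into an (n_frames, dim) matrix.
    emb = np.asarray(frame_embeddings, dtype=np.float64)
    # L2-normalize each frame's vector so no single frame dominates.
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    # Average and re-normalize: the unit-length result is the template.
    template = emb.mean(axis=0)
    return template / np.linalg.norm(template)

def verify(template_a, template_b, threshold=0.5):
    # Cosine similarity of two unit-length templates is their dot product.
    # The threshold is a placeholder; in practice it is chosen from the
    # ROC curve at a target false-accept rate.
    score = float(np.dot(template_a, template_b))
    return score, score >= threshold
```

A verification call would then compare two templates, e.g. `score, same = verify(build_template(video_vecs), build_template(photo_vecs))`, which matches the video-versus-photo comparison the abstract describes.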
3

Tackling the Communication Bottlenecks of Distributed Deep Learning Training Workloads

Ho, Chen-Yu 08 1900 (has links)
Deep Neural Networks (DNNs) find widespread application across various domains, including computer vision, recommendation systems, and natural language processing. Despite their versatility, training DNNs can be time-consuming, and accommodating large models and datasets on a single machine is often impractical. To tackle these challenges, distributed deep learning (DDL) training workloads have gained increasing significance. However, DDL training introduces synchronization requirements among nodes, and the mini-batch stochastic gradient descent algorithm places a heavy burden on network connections. This dissertation proposes, analyzes, and evaluates three solutions addressing the communication bottleneck in DDL training workloads.

The first solution, SwitchML, introduces an in-network aggregation (INA) primitive that accelerates DDL workloads. By aggregating model updates from multiple workers within the network, SwitchML reduces the volume of exchanged data. This approach, which integrates switch processing with end-host protocols and deep learning frameworks, speeds up training by up to 5.5 times on real-world benchmark models.

The second solution, OmniReduce, is an efficient streaming aggregation system designed for sparse collective communication. It optimizes performance for parallel computing applications such as distributed training of large-scale recommendation systems and natural language processing models. OmniReduce achieves maximum effective bandwidth utilization by transmitting only nonzero data blocks and by leveraging fine-grained parallelization and pipelining. It outperforms state-of-the-art TCP/IP and RDMA network solutions by 3.5 to 16 times, delivering significantly better performance for network-bottlenecked DNNs even at 100 Gbps.

The third solution, CoInNetFlow, addresses congestion in shared data centers, where multiple DNN training jobs compete for bandwidth on the same nodes. The study explores the feasibility of coflow scheduling methods in hierarchical and multi-tenant in-network aggregation communication patterns. CoInNetFlow presents an innovative use of the Sincronia priority assignment algorithm. Through packet-level DDL job simulation, the research demonstrates that appropriate weighting functions, transport-layer priority scheduling, and gradient compression on low-priority tensors can improve the median Job Completion Time Inflation by over 70%.

Collectively, this dissertation contributes to mitigating the network communication bottleneck in distributed deep learning. The proposed solutions enhance the efficiency and speed of distributed deep learning systems, ultimately improving the performance of DNN training across various domains.
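The core idea behind the sparse collective communication described above can be illustrated with a short, hedged sketch: split each flat gradient into fixed-size blocks, transmit only the nonzero blocks, and let the aggregator sum them positionally. The block size and all names below are illustrative assumptions, not OmniReduce's actual implementation (which runs over RDMA and kernel-bypass networking, not NumPy).

```python
import numpy as np

BLOCK = 256  # hypothetical block granularity, chosen only for illustration

def nonzero_blocks(grad):
    # Pad the flat gradient to a whole number of blocks.
    n = -(-grad.size // BLOCK)  # ceiling division
    padded = np.zeros(n * BLOCK)
    padded[:grad.size] = grad
    blocks = padded.reshape(n, BLOCK)
    # Send only blocks containing at least one nonzero value.
    return [(i, blocks[i]) for i in range(n) if np.any(blocks[i])]

def aggregate(streams, size):
    # The aggregator sums incoming blocks positionally; positions no
    # worker sent stay zero, so the dense result is reconstructed
    # without ever transmitting the zero blocks.
    n = -(-size // BLOCK)
    out = np.zeros(n * BLOCK)
    for stream in streams:
        for i, data in stream:
            out[i * BLOCK:(i + 1) * BLOCK] += data
    return out[:size]
```

With two mostly-zero gradients, `aggregate([nonzero_blocks(g1), nonzero_blocks(g2)], g1.size)` moves only a handful of blocks instead of the full tensors, which is the bandwidth saving the abstract attributes to sparse aggregation.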
4

Neural Network Utilization for Network Traffic Prediction

Pavela, Radek January 2009 (has links)
This master's thesis discusses the statistical properties of a network traffic trace and the possibilities of prediction, with a focus on neural networks, specifically recurrent neural networks. The training data were downloaded from a freely accessible link on the internet: a packet capture of LAN traffic from 2001. The data are not the most current, but they are sufficient for achieving the objectives of the work. The input data had to be processed into an acceptable form; a program created in Visual Studio 2005 aggregated the packet counts into intensities, and an aggregation interval of 100 ms proved best. The resulting input vector was divided into training and testing parts as needed. The various network types operate on the same input data, which makes the results more objective. In practical terms, two principles had to be verified: training and generalization. The first requires supplying training data and verifying the training by means of the gradient and the mean squared error; the second applies previously unseen data to the neural network and monitors the network's response. The layer recurrent network (LRN) proved to be the best model, so the solution was developed in this direction, followed by a search for a suitable variant of the recurrent network and its optimal configuration. The topology found is 10-10-1. Matlab 7.6 was used, with the Neural Network Toolbox 6 extension. The results are presented in the form of graphs and a final evaluation. All successful models and network topologies are on the enclosed CD. However, the Neural Network Toolbox reported some problems when importing networks, so network import was not used in practice while creating this work: networks can be imported, but most of them then appear to be untrainable. Unsuccessful network models are not presented in this thesis, because they would impair its clarity and readability.
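As a hedged illustration of the preprocessing described above, the sketch below bins packet timestamps into 100 ms intensity intervals and slices the series into input/target windows for a recurrent predictor. The thesis used Visual Studio 2005 and Matlab; this NumPy version, including the lag of 10 echoing the 10-10-1 topology, is an assumption made only for illustration.

```python
import numpy as np

def intensity_series(packet_times_s, bin_ms=100):
    # Count packets per fixed-length interval (100 ms worked best in
    # the thesis), yielding the traffic-intensity time series.
    t = np.asarray(packet_times_s, dtype=np.float64)
    width = bin_ms / 1000.0
    edges = np.arange(t.min(), t.max() + width, width)
    counts, _ = np.histogram(t, bins=edges)
    return counts

def sliding_windows(series, lag=10):
    # Build (past `lag` intensities -> next intensity) pairs for
    # training and testing; the lag of 10 mirrors the 10 inputs of
    # the 10-10-1 topology but is an assumed, not documented, mapping.
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = np.asarray(series[lag:])
    return X, y
```

Splitting `X, y` into training and testing parts then reproduces the train/generalize evaluation the abstract describes: fit on the first portion, and monitor the prediction error on the unseen remainder.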
