171

Enhanced convolution approach for CAC in ATM networks, an analytical study and implementation

Marzo i Lázaro, Josep Lluís 07 February 1997 (has links)
The characteristics of service independence and flexibility of ATM networks make the control problems of such networks very critical. One of the main challenges in ATM networks is to design traffic control mechanisms that enable both economically efficient use of the network resources and the desired quality of service for higher-layer applications. Window flow control mechanisms of traditional packet-switched networks are not well suited to real-time services at the speeds envisaged for future networks. In this work, the utilisation of the Probability of Congestion (PC) as a bandwidth decision parameter is presented. The validity of using the PC is compared with QoS parameters in bufferless environments, where only the cell loss ratio (CLR) parameter is relevant. The convolution algorithm is a good solution for CAC in ATM networks with small buffers. If the source characteristics are known, the actual CLR can be estimated very well. Furthermore, this estimation is always conservative, allowing the network performance guarantees to be retained. Several experiments have been carried out and investigated to explain the deviation between the proposed method and simulation. Time parameters for burst length and different buffer sizes have been considered. Experiments to confine the limits of the burst length with respect to the buffer size conclude that a minimum buffer size is necessary to achieve adequate cell contention. Note that propagation delay cannot be dismissed for long-distance and interactive communications, so small buffers must be used in order to minimise delay. Under these premises, the convolution approach is the most accurate method for bandwidth allocation, giving sufficient accuracy in both homogeneous and heterogeneous networks. However, the convolution approach has a considerable computational cost and accumulates a high number of calculations.
To overcome these drawbacks, a new method of evaluation is analysed: the Enhanced Convolution Approach (ECA). In the ECA, traffic is grouped into classes of identical parameters. By using the multinomial distribution function instead of the formula-based convolution, a partial state corresponding to each class of traffic is obtained. Finally, the global state probabilities are evaluated by multi-convolution of the partial results. This method avoids accumulated calculations and saves storage, especially in complex scenarios. Sorting is the dominant cost factor for the formula-based convolution, whereas cost evaluation is the dominant factor for the enhanced convolution. A set of cut-off mechanisms is introduced to reduce the complexity of the ECA evaluation. The ECA also computes the CLR for each class j of traffic (CLRj); an expression for its evaluation is presented. We conclude that, by combining the ECA method with cut-off mechanisms, utilisation of the ECA in real-time CAC environments as a single-level scheme is always possible.
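As an illustration of the class-grouping idea behind the ECA described above, the following sketch computes the aggregate-rate distribution of classes of identical on-off sources (the binomial case of the multinomial step), combines the partial distributions by convolution, and evaluates the probability of congestion as the probability that the aggregate rate exceeds the link capacity. All traffic figures are invented for illustration; this is not the thesis's implementation.

```python
from math import comb

def class_rate_distribution(n_sources, p_active, peak_rate):
    """Binomial distribution of the aggregate rate of one class of
    identical on-off sources (a special case of the multinomial step)."""
    dist = {}
    for k in range(n_sources + 1):
        prob = comb(n_sources, k) * p_active**k * (1 - p_active)**(n_sources - k)
        dist[k * peak_rate] = prob
    return dist

def convolve(dist_a, dist_b):
    """Multi-convolution step: combine two partial state distributions."""
    out = {}
    for ra, pa in dist_a.items():
        for rb, pb in dist_b.items():
            out[ra + rb] = out.get(ra + rb, 0.0) + pa * pb
    return out

def congestion_probability(classes, capacity):
    """P(aggregate rate > capacity) for a list of (n, p_active, peak_rate)."""
    total = {0: 1.0}
    for n, p, r in classes:
        total = convolve(total, class_rate_distribution(n, p, r))
    return sum(prob for rate, prob in total.items() if rate > capacity)

# Two hypothetical traffic classes sharing a 100 Mbit/s link
pc = congestion_probability([(20, 0.3, 10), (5, 0.5, 2)], capacity=100)
```

Grouping identical sources into a single binomial term is what removes the per-source accumulated calculations of the formula-based convolution.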
172

Generalized Statistical Tolerance Analysis and Three Dimensional Model for Manufacturing Tolerance Transfer in Manufacturing Process Planning

January 2011 (has links)
abstract: Manufacturing tolerance charts are mostly what is used these days for manufacturing tolerance transfer, but they have the limitation of being one-dimensional only. Some research has been undertaken on three-dimensional geometric tolerances, but it is too theoretical and not yet ready for operator-level usage. In this research, a new three-dimensional model for tolerance transfer in manufacturing process planning is presented that is user-friendly in the sense that it is built upon Coordinate Measuring Machine (CMM) readings, which are readily available in any decent manufacturing facility. This model can handle datum reference changes between non-orthogonal datums (squeezed datums), non-linearly oriented datums (twisted datums), etc. A graph-theoretic approach based upon ACIS, C++ and MFC is laid out to facilitate its implementation for automation of the model. A totally new approach to determining dimensions and tolerances for the manufacturing process plan is also presented. Secondly, a new statistical model for statistical tolerance analysis based upon the joint probability distribution of trivariate normally distributed variables is presented. 4-D probability maps have been developed in which the probability value of a point in space is represented by the size of the marker and the associated color. Points inside the part map represent the pass percentage for manufactured parts. The effect of refinement with form and orientation tolerance is highlighted by calculating the change in pass percentage relative to the pass percentage for size tolerance only. Delaunay triangulation and ray-tracing algorithms have been used to automate the process of identifying the points inside and outside the part map. Proof-of-concept software has been implemented to demonstrate this model and to determine pass percentages for various cases.
The model is further extended to assemblies by employing convolution algorithms on two trivariate statistical distributions to arrive at the statistical distribution of the assembly. The map generated by using Minkowski sum techniques on the individual part maps is superimposed on the probability point cloud resulting from the convolution. Delaunay triangulation and ray-tracing algorithms are employed to determine the assemblability percentages for the assembly. / Dissertation/Thesis / Ph.D. Mechanical Engineering 2011
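The pass-percentage idea described above can be sketched with a Monte Carlo stand-in (not the thesis's Delaunay/ray-tracing method): sample the trivariate normal distribution of three correlated part dimensions and count the fraction of samples falling inside the tolerance limits. The nominal dimensions, tolerances, and covariance below are invented for illustration.

```python
import numpy as np

def pass_percentage(mean, cov, lower, upper, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the percentage of parts whose three
    (possibly correlated) dimensions all fall inside their tolerance limits."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    inside = np.all((samples >= lower) & (samples <= upper), axis=1)
    return inside.mean() * 100.0

# Illustrative part: nominal dimensions 10/20/5 mm, +/-0.1 mm tolerance,
# each dimension with a standard deviation of 0.03 mm (independent here)
mean = [10.0, 20.0, 5.0]
cov = np.diag([0.03**2, 0.03**2, 0.03**2])
pct = pass_percentage(mean, cov, lower=[9.9, 19.9, 4.9], upper=[10.1, 20.1, 5.1])
```

For assemblies, the same counting step would be applied to the distribution obtained by convolving (summing) the per-part distributions, as the abstract describes.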
173

Metodologia de redução dos espectros de correlação angular perturbada / Methodology for reduction of perturbed angular correlation spectra

Rogerio Tramontano 25 April 2003 (has links)
Medidas de correlação angular perturbada diferencial no tempo - TDPAC - foram efetuadas com um sistema de detetores de HPGe com o objetivo de ampliar o conjunto de nuclídeos utilizáveis como sondas de prova de campo magnético e de gradiente de campo elétrico na matéria. A análise dos espectros obtidos considera a convolução angular de ordem superior a dois, o que está fora do escopo do procedimento convencional quando se utiliza o arranjo experimental padrão. O algoritmo é baseado no método dos mínimos quadrados e considera rigorosamente as incertezas estatísticas dos dados. O programa de cálculo implementado é orientado a objetos, que representam as estruturas matemáticas envolvidas na redução dos dados pelo método dos mínimos quadrados e os sistemas físicos característicos do experimento. Os detetores semicondutores mostraram-se inadequados ao estudo de materiais por TDPAC nas condições experimentais disponíveis. O método de análise proposto aqui foi aplicado à redução dos espectros obtidos em outros laboratórios, que utilizam cintiladores rápidos, resultando na determinação de parâmetros associados à estrutura cristalina para os quais a análise convencional não é sensível, em particular dos coeficientes de atenuação temporal da correlação para cada uma das freqüências de oscilação. Esta metodologia permite calcular corretamente as incertezas nos parâmetros, notadamente nas frações de ocupação de diferentes sítios pela sonda de prova. / Time dependent perturbed angular correlation TDPAC measurements were performed with an HPGe detector array aiming to increase the set of nuclides usable as magnetic field and electric field gradient probes in matter. The analysis of the obtained spectra takes into account the convolution of the perturbation function with the detector time response and angular correlation coefficients of order greater than two, which is not within the scope of the conventional procedure when the standard experimental arrangement is used.
The algorithm is based on the least-squares method and rigorously takes into account the statistical uncertainties of the data. The implemented computer code is built on objects representing the mathematical entities used in data reduction by the least-squares method and the physical components of the experiment. The semiconductor detectors were found unsuitable for material studies through TDPAC under the available experimental conditions. The analysis method proposed here was applied to the reduction of spectra obtained by other laboratories that use fast scintillators, yielding crystalline-structure-related parameters which cannot be determined in the conventional analysis, in particular the correlation time-attenuation parameters for each oscillation frequency. The uncertainties in the fitted parameters are correctly calculated by this method, notably the probe site occupation fractions.
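The convolution step mentioned in the abstract can be illustrated as follows: smearing a single oscillation frequency of the perturbation function with a Gaussian detector time response attenuates its amplitude by exp(-(omega*sigma)^2/2), which is what makes time-attenuation coefficients accessible to a least-squares fit. The values of omega and sigma below are invented for illustration and are not taken from the experiment.

```python
import numpy as np

def convolve_with_response(t, signal, sigma):
    """Convolve a perturbation function sampled on the uniform grid t with
    a Gaussian detector time response of standard deviation sigma."""
    dt = t[1] - t[0]
    kernel_t = np.arange(-5 * sigma, 5 * sigma + dt, dt)
    kernel = np.exp(-0.5 * (kernel_t / sigma) ** 2)
    kernel /= kernel.sum()                       # unit-area response
    return np.convolve(signal, kernel, mode="same")

# One oscillation frequency smeared by the detector time resolution
omega, sigma = 2 * np.pi * 0.1, 1.5              # illustrative values (1/ns, ns)
t = np.arange(0, 200, 0.1)
smeared = convolve_with_response(t, np.cos(omega * t), sigma)
```

In a fit, the attenuation of each frequency would be a free parameter constrained jointly with the site occupation fractions.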
174

Real-time Wind Direction Filtering for Sailboat Race Tracking

Nielsen, Emil January 2015 (has links)
In this paper, an algorithm that calculates the direction of the wind from the directions of sailors during fleet races is proposed. The algorithm is based on a 1-D spatial convolution and is named Convolution Based Direction Filtering (CBDF). The CBDF algorithm is used in the TracTrac race client that broadcasts sailboat races in real time. The fact that the proposed algorithm is polynomial makes it suitable for use as a real-time application inside TracTrac, even for large fleets. More concretely, we show that the worst-case time complexity of the CBDF algorithm is O(n²), where n > 0 is the number of boats in competition. It is also shown that in more realistic sailing scenarios, the CBDF algorithm is in fact linear.
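A minimal sketch of the kind of 1-D spatial convolution involved (not the actual CBDF algorithm, whose details are in the thesis): boat headings are angles, so they are smoothed along the course by a kernel-weighted circular mean of unit vectors. The naive all-pairs version below is O(n²); restricting each boat to a bounded neighbourhood makes it effectively linear.

```python
import numpy as np

def filtered_wind_directions(positions, headings_deg, bandwidth):
    """Kernel-weighted circular mean of boat headings: a 1-D spatial
    convolution along the course. O(n^2) as written (all pairs); linear
    when each boat only sees a bounded neighbourhood."""
    pos = np.asarray(positions, dtype=float)
    ang = np.radians(headings_deg)
    vx, vy = np.cos(ang), np.sin(ang)            # headings as unit vectors
    out = np.empty(len(pos))
    for i in range(len(pos)):
        w = np.exp(-0.5 * ((pos - pos[i]) / bandwidth) ** 2)  # Gaussian kernel
        out[i] = np.degrees(np.arctan2(np.sum(w * vy), np.sum(w * vx))) % 360
    return out

# Boats spread along a 1-D course, headings scattered around 45 degrees
est = filtered_wind_directions([0, 10, 20, 30], [40, 50, 44, 46], bandwidth=15)
```

Averaging unit vectors rather than raw angles avoids the 359°/1° wrap-around problem.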
175

Joint Eigenfunctions On The Heisenberg Group And Support Theorems On R^n

Samanta, Amit 05 1900 (has links) (PDF)
This work is concerned with two different problems in harmonic analysis, one on the Heisenberg group and the other on R^n, as described in the following two paragraphs respectively. Let H^n be the (2n + 1)-dimensional Heisenberg group, and let K be a compact subgroup of U(n) such that (K, H^n) is a Gelfand pair. Also assume that the K-action on C^n is polar. We prove a Hecke-Bochner identity associated to the Gelfand pair (K, H^n). For the special case K = U(n), this was proved by Geller, giving a formula for the Weyl transform of a function f of the type f = Pg, where g is a radial function and P a bigraded solid U(n)-harmonic polynomial. Using our general Hecke-Bochner identity we also characterize (under some conditions) joint eigenfunctions of all differential operators on H^n that are invariant under the action of K and the left action of H^n. We consider convolution equations of the type f * T = g, where f, g ∈ L^p(R^n) and T is a compactly supported distribution. Under natural assumptions on the zero set of the Fourier transform of T, we show that f is compactly supported, provided g is.
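The support theorem in the second part can be motivated on the Fourier transform side (a heuristic sketch only, not the argument of the thesis):

```latex
f * T = g \;\Longrightarrow\; \hat{f}(\xi)\,\hat{T}(\xi) = \hat{g}(\xi),
\qquad \xi \in \mathbb{R}^n ,
```

so, formally, \hat{f} = \hat{g}/\hat{T} wherever \hat{T} does not vanish. Since T is compactly supported, \hat{T} extends to an entire function of exponential type (Paley-Wiener), and the assumptions on its zero set are what allow this division to be controlled, so that f inherits compact support from g.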
176

Rozpoznávání znaků z realných scén pomocí neuronových sítí / Character recognition of real scenes using neural networks

Fiala, Petr January 2014 (has links)
This thesis focuses on the problem of character recognition from real scenes, which has earned a significant amount of attention with the development of modern technology. The aim of the work is to take an algorithm with state-of-the-art performance on standard data sets and apply it to the recognition task. The chosen algorithm is a deep convolutional network, whose application to this specific task has not yet been published. The implemented solution builds on the theoretical part, which provides a comprehensive overview. Two types of neural network are used in the practical part: a multilayer perceptron and the convolutional model. Since the convolutional network's more complex structure gives a much lower classification error than the MLP on the first data set, only the convolutional structure is used in the further experiments. The model is validated on two public data sets that correspond to the specification of the task. In order to obtain an optimal solution based on the data structure, several tests were performed with modified network configurations and various adjustments to the input data. The presented solution achieved a prediction rate comparable to the best results of other studies while using artificially generated learning patterns. In conclusion, the thesis describes possible extensions and improvements of the model, which should lead to a decrease in the classification error.
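The artificially generated learning patterns mentioned above can be sketched as follows: starting from a clean glyph, training variants are produced by random shifts and pixel noise. This is a simplified, hypothetical stand-in for the thesis's pattern generation, using only NumPy.

```python
import numpy as np

def augment(glyph, n_variants, max_shift=2, noise=0.1, seed=0):
    """Generate artificial training variants of a glyph (values in [0, 1])
    by random circular shifts and additive Gaussian pixel noise."""
    rng = np.random.default_rng(seed)
    variants = []
    for _ in range(n_variants):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        shifted = np.roll(np.roll(glyph, dy, axis=0), dx, axis=1)
        noisy = np.clip(shifted + rng.normal(0, noise, glyph.shape), 0.0, 1.0)
        variants.append(noisy)
    return np.stack(variants)

# A tiny 8x8 "T" glyph, expanded into 16 noisy shifted copies
glyph = np.zeros((8, 8))
glyph[1, 1:7] = 1.0
glyph[2:7, 3] = 1.0
batch = augment(glyph, n_variants=16)
```

Such synthetic batches let a convolutional network see many pose and noise variations of each character class without hand-labelled data.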
177

Arquitetura do módulo de convolução para visão computacional baseada em FPGA / Convolution module architecture for computer vision based on FPGA

Almeida, Carlos Caetano de, 1976- 07 August 2015 (has links)
Orientador: Eurípedes Guilherme de Oliveira Nóbrega / Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica / Resumo: Esta dissertação apresenta o estudo de uma arquitetura para o processamento digital de imagens, desenvolvido através de dispositivos de hardware programável, no caso FPGA, para a implementação eficiente no domínio do tempo do algoritmo da convolução discreta, que permita sua integração em redes neurais de convolução com múltiplas camadas, conhecidas como ConvNets, visando sua aplicação na área de visão computacional. A implementação em software pode acarretar elevado custo computacional de muitos algoritmos, o que pode não atender às restrições de aplicações em tempo real, logo o uso de implementações em FPGA torna-se uma ferramenta atraente. A convolução 2D na área de visão computacional é um desses algoritmos. O uso de FPGA permite a adoção de execução concorrente para os algoritmos, por ser em hardware, possibilitando que as redes de convolução possam vir a ser adotadas em sistemas embarcados de visão computacional. Neste trabalho de pesquisa foram estudadas duas soluções. Na primeira foi implementado no FPGA o processador soft core NIOS II®, e programado o algoritmo. Na segunda solução, foi desenvolvida uma configuração em que o algoritmo foi implementado diretamente em hardware, sem a necessidade de um microprocessador tradicional. Os resultados mostram que uma redução expressiva do tempo de processamento pode ser esperada em aplicações reais. 
Na continuidade do trabalho, deverá ser implementado e testado o algoritmo completo como parte de uma aplicação de redes ConvNets / Abstract: This research work presents the study of an architecture for image processing, using programmable hardware devices, in this case an FPGA, for an efficient time-domain implementation of the discrete convolution algorithm, enabling its integration into multilayer convolutional networks, known as ConvNets, aiming at computer vision applications. For many algorithms, a software implementation can imply a high computational cost, which may not satisfy real-time restrictions, making FPGA adoption an attractive solution. 2D convolution for computer vision is one of these algorithms. A hardware implementation on an FPGA allows concurrent execution of the algorithm, enabling convolutional networks to be adopted in embedded computer vision systems. In this research work, two different solutions were studied. In the first, a NIOS II® soft-core processor was implemented in the FPGA and the convolution algorithm was programmed on it. In the second, the algorithm was implemented directly in hardware, eliminating the need for a conventional microprocessor. The results show that a significant reduction in processing time may be expected in real applications. As a continuation of this work, the complete algorithm should be implemented and tested as part of a ConvNet application / Mestrado / Mecanica dos Sólidos e Projeto Mecanico / Mestre em Engenharia Mecânica
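As a software reference for the operation the FPGA architecture parallelises, here is a direct (space-domain) 2D discrete convolution over the valid region, in plain Python/NumPy. This is the baseline computation, not the hardware design itself.

```python
import numpy as np

def conv2d(image, kernel):
    """Direct (space-domain) 2-D convolution, 'valid' region only:
    the reference computation that a hardware pipeline would parallelise."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    flipped = kernel[::-1, ::-1]          # true convolution flips the kernel
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * flipped)
    return out

# 3x3 averaging kernel over a 5x5 ramp image
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.full((3, 3), 1.0 / 9.0)
smoothed = conv2d(image, kernel)
```

Each output pixel is an independent multiply-accumulate over a window, which is exactly what makes the operation amenable to concurrent hardware execution.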
178

Numerické metody registrace obrazů s využitím nelineární geometrické transformace / Numerical Method of Image Registration Using Nonlinear Geometric Transform

Rára, Michael January 2019 (has links)
The goal of the thesis is to create simple software that processes input data degraded by atmospheric seeing and produces an output image that is as close to reality as possible. Another output is a group of images illustrating the shift of each input image relative to their average image.
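One standard way to estimate each frame's shift relative to a reference (such as the average image) is phase correlation; the sketch below is an illustrative stand-in, not necessarily the method used in the thesis.

```python
import numpy as np

def estimate_shift(reference, frame):
    """Estimate the integer (dy, dx) circular translation of `frame`
    relative to `reference` by phase correlation (FFT cross-correlation
    with a normalised cross-power spectrum)."""
    f_ref = np.fft.fft2(reference)
    f_img = np.fft.fft2(frame)
    cross = np.conj(f_ref) * f_img
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map the correlation peak to a signed shift
    if dy > reference.shape[0] // 2:
        dy -= reference.shape[0]
    if dx > reference.shape[1] // 2:
        dx -= reference.shape[1]
    return int(dy), int(dx)

# Example: a frame shifted by (3, -2) pixels relative to the reference
rng = np.random.default_rng(1)
ref = rng.random((16, 16))
shift = estimate_shift(ref, np.roll(ref, (3, -2), axis=(0, 1)))
```

Aligning the frames by their estimated shifts before averaging is what sharpens the seeing-degraded output.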
179

Nízko-dimenzionální faktorizace pro "End-To-End" řečové systémy / Low-Dimensional Matrix Factorization in End-To-End Speech Recognition Systems

Gajdár, Matúš January 2020 (has links)
The project covers automatic speech recognition with neural network training using low-dimensional matrix factorization. We describe time-delay neural networks with factorization (TDNN-F) and without it (TDNN), implemented in PyTorch. We compare the PyTorch implementation with the Kaldi toolkit, achieving similar results in experiments with various network architectures. The last chapter describes the impact of low-dimensional matrix factorization on end-to-end speech recognition systems, as well as a modification of the system with TDNN(-F) networks. Using specific network settings, we were able to achieve better results with systems using factorization. Additionally, we reduced the training complexity by decreasing the number of network parameters through the use of TDNN(-F) networks.
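The parameter reduction behind TDNN-F can be sketched as a low-rank factorization of a layer's weight matrix: M (d x d) is replaced by A @ B with a small inner dimension. The truncated-SVD construction below is only an illustration; in TDNN-F training one factor is additionally kept semi-orthogonal, a constraint omitted here.

```python
import numpy as np

def factorize(weight, rank):
    """Low-rank factorization of a weight matrix via truncated SVD:
    weight (d_out x d_in) ~ a @ b with inner dimension `rank`."""
    u, s, vt = np.linalg.svd(weight, full_matrices=False)
    a = u[:, :rank] * s[:rank]          # d_out x rank
    b = vt[:rank, :]                    # rank  x d_in
    return a, b

d, rank = 512, 64
m = np.random.default_rng(0).standard_normal((d, d))
a, b = factorize(m, rank)
full_params = m.size                    # 512 * 512 = 262144
fact_params = a.size + b.size           # 2 * 512 * 64 = 65536, a 4x reduction
```

The same bottleneck structure is what reduces both the parameter count and the training cost reported in the experiments.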
180

Umělá inteligence pro klasifikaci aplikačních služeb v síťové komunikaci / Artificial intelligence for application services classification in network communication

Jelínek, Michael January 2021 (has links)
The master's thesis focuses on the selection of a suitable algorithm for the classification of selected network traffic services, and on its implementation. The theoretical part describes the available classification approaches, commonly used algorithms, and the selected network services. The practical part focuses on the preparation and preprocessing of the dataset, the selection and optimization of the classification algorithm, and the verification of its classification capabilities in various scenarios of the dataset.
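As a minimal sketch of flow classification from numeric features (illustrative only; the thesis's chosen algorithm and features are its own), a nearest-neighbour classifier over per-flow statistics such as mean packet size and inter-arrival time:

```python
import numpy as np

def knn_classify(train_x, train_y, query, k=3):
    """k-nearest-neighbour classification of a flow described by
    numeric features, by majority vote among the k closest flows."""
    distances = np.linalg.norm(train_x - query, axis=1)
    nearest = np.argsort(distances)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Synthetic flows: [mean packet size (bytes), mean inter-arrival time (ms)]
x = np.array([[1400, 2], [1350, 3], [160, 20], [180, 25]], dtype=float)
y = np.array(["video", "video", "voip", "voip"])
label = knn_classify(x, y, np.array([1300.0, 4.0]))
```

In practice, features would be scaled and the classifier chosen and tuned on the preprocessed dataset, as the practical part describes.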
