321

Real Time Implementation of Map Aided Positioning Using a Bayesian Approach / Realtidsimplementation av kartstödd positionering med hjälp av Bayesianska estimeringsmetoder

Svenzén, Niklas January 2002 (has links)
With the simple means of a digitized map and the wheel speed signals, it is possible to position a vehicle with an accuracy comparable to GPS. The positioning problem is a non-linear filtering problem, and a particle filter has been applied to solve it. Two new approaches studied are the Auxiliary Particle Filter (APF), which aims at lowering the variance of the error, and Rao-Blackwellization, which exploits the linearities in the model. The results show that these methods require problems of higher complexity to fully utilize their advantages. Another aspect of this thesis has been to handle off-road driving scenarios using dead reckoning. An off-road detection mechanism has been developed, and the results show that off-road driving can be detected accurately. The algorithm has been successfully implemented on a hand-held computer by quantizing the particle filter while keeping good filter performance.
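The map-aided positioning described above rests on a bootstrap particle filter. The following minimal sketch is not code from the thesis: the 1-D road model, the noise levels, and the `measure_likelihood` interface are illustrative assumptions showing the predict-weight-resample cycle.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(positions, weights, odometry, measure_likelihood):
    """One bootstrap-filter step: predict from wheel-speed odometry,
    weight by the map-based measurement likelihood, then resample."""
    # Predict: propagate each particle with noisy odometry.
    positions = positions + odometry + rng.normal(0.0, 0.5, positions.shape)
    # Update: reweight particles by how well they match the map.
    weights = weights * measure_likelihood(positions)
    weights = weights / weights.sum()
    # Resample (multinomial) to avoid weight degeneracy.
    idx = rng.choice(len(positions), size=len(positions), p=weights)
    return positions[idx], np.full(len(positions), 1.0 / len(positions))

# Toy example: true position at 10.0 on a 1-D road, likelihood peaked there.
pos = rng.uniform(0.0, 20.0, 1000)
w = np.full(1000, 1e-3)
lik = lambda x: np.exp(-0.5 * (x - 10.0) ** 2)
for _ in range(5):
    pos, w = particle_filter_step(pos, w, odometry=0.0, measure_likelihood=lik)
print(float(np.mean(pos)))
```

After a few steps the particle cloud concentrates near the true position; the thesis's quantized hand-held implementation replaces the floating-point arithmetic above with fixed-point operations.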
322

Rendu d'images en demi-tons par diffusion d'erreur sensible à la structure / Structure-aware error-diffusion halftoning

Alain, Benoît 12 1900 (has links)
Le présent mémoire comprend un survol des principales méthodes de rendu en demi-tons, de l’analog screening à la recherche binaire directe en passant par l’ordered dither, avec une attention particulière pour la diffusion d’erreur. Ces méthodes seront comparées dans la perspective moderne de la sensibilité à la structure. Une nouvelle méthode de rendu en demi-tons par diffusion d’erreur est présentée et soumise à diverses évaluations. La méthode proposée se veut originale, simple, autant à même de préserver le caractère structurel des images que la méthode à l’état de l’art, et plus rapide que cette dernière par deux à trois ordres de magnitude. D’abord, l’image est décomposée en fréquences locales caractéristiques. Puis, le comportement de base de la méthode proposée est donné. Ensuite, un ensemble minutieusement choisi de paramètres permet de modifier ce comportement de façon à épouser les différents caractères fréquentiels locaux. Finalement, une calibration détermine les bons paramètres à associer à chaque fréquence possible. Une fois l’algorithme assemblé, toute image peut être traitée très rapidement : chaque pixel est attaché à une fréquence propre, cette fréquence sert d’indice pour la table de calibration, les paramètres de diffusion appropriés sont récupérés, et la couleur de sortie déterminée pour le pixel contribue en espérance à souligner la structure dont il fait partie. / This work covers some important methods in the domain of halftoning: analog screening, ordered dither, direct binary search, and most particularly error diffusion. The methods will be compared in the modern perspective of sensitivity to structure. A novel halftoning method is also presented and subjected to various evaluations. It produces images of visual quality comparable to that of the state-of-the-art Structure-aware Halftoning method; at the same time, it is two to three orders of magnitude faster. 
First, the thesis describes how an image can be decomposed into its local frequency content. Then, the basic behavior of the proposed method is given. Next, a carefully chosen set of parameters is presented that allows this behavior to be modified, so as to maximize the eventual reactivity to frequency content. Finally, a calibration step determines what values the parameters should take for any local frequency information encountered. Once the algorithm is assembled, any image can be treated very efficiently: each pixel is attached to its dominant frequency, the frequency serves as a lookup index into the calibration table, the proper diffusion parameters are retrieved, and the determined output color contributes, in expectation, to underlining the structure from which the pixel comes.
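Error diffusion itself is simple to state. A minimal Floyd-Steinberg implementation, the classical structure-unaware baseline that surveys of this kind start from (not the proposed method), looks like this:

```python
import numpy as np

def floyd_steinberg(gray):
    """Classic error diffusion: quantize each pixel to black/white and
    push the residual error onto the not-yet-visited neighbours."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128.0 else 0.0
            out[y, x] = new
            err = old - new
            # Standard Floyd-Steinberg weights 7/16, 3/16, 5/16, 1/16.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out

# A flat mid-gray patch halftones to roughly 50% ink coverage.
halftone = floyd_steinberg(np.full((32, 32), 127.0))
print(float(halftone.mean()))
```

The structure-aware variants discussed in the thesis keep this overall scan-and-diffuse loop but make the threshold or the diffusion weights depend on local frequency content.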
323

Investigation of Combinations of Vector Quantization Methods with Multidimensional Scaling / Vektorių kvantavimo metodų jungimo su daugiamatėmis skalėmis analizė

Molytė, Alma 30 June 2011 (has links)
Often there is a need to establish and understand the structure of multidimensional data: their clusters, outliers, similarity and dissimilarity. One solution is dimensionality reduction and visualization of the data. If a huge dataset is analyzed, it is purposeful to reduce the number of data items before visualization. The area of research is the reduction of the number of data items analyzed and the mapping of the data onto a plane. In the dissertation, vector quantization methods based on artificial neural networks and visualization methods based on dimensionality reduction have been investigated. The consecutive and integrated combinations of neural gas and multidimensional scaling are proposed here as an alternative to combinations of self-organizing maps and multidimensional scaling. The visualization quality is estimated by König's topology preservation measure, Spearman's rho and the MDS error. These measures allow us to evaluate quantitatively how well similarities are preserved after a transformation of multidimensional data into a lower-dimensional space. Ways of selecting the initial values of two-dimensional vectors in the consecutive combination and in the first training block of the integrated combination have been proposed, and ways of assigning the initial values of two-dimensional vectors in all the training blocks of the integrated combination except the first have been developed. The dependence of the quantization error on the values of training... [to full text] / Dažnai iškyla būtinybė nustatyti ir giliau pažinti daugiamačių duomenų struktūrą: susidariusius klasterius, itin išsiskiriančius objektus, objektų tarpusavio panašumą ir skirtingumą. Vienas iš sprendimų būdų – duomenų dimensijos mažinimas ir jų vizualizavimas. Kai analizuojamos didelės duomenų aibės, tikslinga prieš vizualizavimą sumažinti ne tik dimensiją, bet ir duomenų skaičių.
Šio darbo tyrimų sritis yra daugiamačių duomenų skaičiaus mažinimas ir duomenų atvaizdavimas plokštumoje. Disertacijoje nagrinėjami dirbtiniais neuroniniais tinklais grindžiami vektorių kvantavimo ir dimensijos mažinimu pagrįsti vizualizavimo metodai. Kaip alternatyva saviorganizuojančių neuroninių tinklų ir daugiamačių skalių junginiams, darbe pasiūlyti nuoseklus neuroninių dujų ir daugiamačių skalių junginys bei integruotas, atsižvelgiantis į neuroninių dujų metodo mokymosi eigą ir leidžiantis gauti tikslesnę daugiamačių vektorių projekciją plokštumoje. Junginiais gautų vaizdų kokybės vertinimui pasirinkti Konigo matas, Spirmano koeficientas bei MDS paklaida. Šie matai leidžia kiekybiškai įvertinti panašumų išlaikymą po daugiamačių duomenų transformavimo į mažesnės dimensijos erdvę. Taip pat pasiūlyti dvimačių vektorių pradinių koordinačių parinkimo būdai nuosekliame junginyje ir integruoto junginio pirmame mokymo bloke bei koordinačių reikšmių priskyrimo būdai integruoto junginio kituose mokymo blokuose. Eksperimentiškai nustatyta kvantavimo paklaidos priklausomybė nuo neuroninių dujų tinklo... [toliau žr. visą tekstą]
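The quantization half of the consecutive combination described above can be sketched as follows: neural gas reduces the data set to a small codebook, which would then be mapped onto the plane with MDS. This is a toy illustration, not the dissertation's implementation; all parameter values (neighbourhood range, learning rate, schedule) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def neural_gas(data, n_units=10, epochs=40):
    """Minimal neural gas: all units are ranked by distance to each input
    and adapted with a rank-decaying step, yielding a codebook that
    quantizes the data set before visualization."""
    units = data[rng.choice(len(data), n_units, replace=False)].copy()
    for t in range(epochs):
        lam = 5.0 * (0.01 / 5.0) ** (t / epochs)   # neighbourhood range decay
        eps = 0.5 * (0.05 / 0.5) ** (t / epochs)   # learning-rate decay
        for x in data[rng.permutation(len(data))]:
            dists = np.linalg.norm(units - x, axis=1)
            ranks = np.argsort(np.argsort(dists))   # 0 = closest unit
            units += eps * np.exp(-ranks / lam)[:, None] * (x - units)
    return units

# Two well-separated 5-D clusters; the codebook stands in for the data.
data = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(8, 1, (100, 5))])
codebook = neural_gas(data)
# The reduced set (codebook) would next be passed to MDS for a 2-D map.
q_err = np.mean(np.min(np.linalg.norm(data[:, None] - codebook[None], axis=2), axis=1))
print(float(q_err))
```

The quantization error printed at the end is the quantity whose dependence on the training parameters the dissertation studies experimentally.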
325

Pair Production and the Light-Front Vacuum

Ghorbani Ghomeshi, Ramin January 2013 (has links)
Dominated by Heisenberg's uncertainty principle, the vacuum is not, quantum mechanically, an empty void: virtual pairs of particles appear and disappear persistently. This nonlinearity provokes a number of phenomena that can only be observed in practice by going to a high-intensity regime. Pair production beyond the so-called Sauter-Schwinger limit, roughly the field-intensity threshold above which pairs show up copiously, is such a nonlinear vacuum phenomenon. From the viewpoint of Dirac's front form of Hamiltonian dynamics, however, the vacuum turns out to be trivial. This triviality would suggest that Schwinger pair production is not possible. Of course, this holds only up to zero modes. While the instant form of relativistic dynamics has already been well explored, at least theoretically, the way is still open for investigating the front form. The aim of this thesis is to explore this contradictory aspect of the quantum vacuum in two different forms of relativistic dynamics and hence to investigate the possibility of finding a way to resolve the ambiguity. The exercise is largely based on the application of field quantization to light-front dynamics. In this regard, some concepts within strong-field theory and light-front quantization which are fundamental to our survey are introduced, the orders of magnitude of a few important quantum-electrodynamical quantities are fixed, and basic information on a small number of nonlinear vacuum phenomena is identified. The light-front quantization of simple bosonic and fermionic systems is given, in particular the light-front quantization of a fermion in a background electromagnetic field in (1+1) dimensions. The light-front vacuum appears to be trivial in this particular case as well. Among the suggested methods to resolve the aforementioned ambiguity, the discrete light-cone quantization (DLCQ) method is applied to the Dirac equation in (1+1) dimensions.
Furthermore, the Tomaras-Tsamis-Woodard (TTW) solution, a method to resolve the zero-mode issue, is also revisited. Finally, the path-integral formulation of quantum mechanics is discussed and, as an alternative to the TTW solution, it is proposed that the worldline approach in the light-front framework may shed light on different aspects of the TTW solution and give a clearer picture of the light-front vacuum and of pair production on the light front.
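The Sauter-Schwinger limit invoked above is a definite number: the critical field is E_c = m²c³/(eħ), at which the work done by the field over a Compton wavelength is comparable to the electron rest energy. A quick order-of-magnitude check:

```python
# Order-of-magnitude check of the Sauter-Schwinger critical field
# E_c = m^2 c^3 / (e * hbar), above which vacuum pair production
# is no longer exponentially suppressed.
m    = 9.1093837e-31    # electron mass, kg
c    = 2.99792458e8     # speed of light, m/s
e    = 1.602176634e-19  # elementary charge, C
hbar = 1.054571817e-34  # reduced Planck constant, J s

E_c = m**2 * c**3 / (e * hbar)
print(f"{E_c:.2e} V/m")   # ~1.3e18 V/m
```

Fields of this strength are far beyond present-day lasers, which is why the thesis speaks of going to a high-intensity regime to observe such nonlinear vacuum phenomena.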
326

Electrical Characterization of Cluster Devices

Sattar, Abdul January 2011 (has links)
The aim of the study presented in this thesis is to explore the electrical and physical properties of films of tin and lead clusters. Understanding the novel conductance properties of cluster films and related phenomena such as coalescence is important for fabricating any cluster-based devices. Coalescence is an important phenomenon in metallic cluster films: as clusters coalesce, the morphology of the films changes with time, which changes their properties and could lead to failure in cluster devices. Coalescence is studied in Sn and Pb cluster films deposited on Si$_3$N$_4$ surfaces using an Ultra High Vacuum (UHV) cluster deposition system. The conductance of the overall film is linked by simulations to the conductance of the individual necks between clusters. It is observed that the coalescence process in Sn and Pb films follows a power law in time with an exponent smaller than reported in the literature. These results are substantiated by results from previous experimental and Kinetic Monte Carlo (KMC) simulation studies at UC. Percolating films of Sn show unique conductance properties. These films are characterized using various electrode configurations, applied voltages and temperatures. The conductance measurements are performed by depositing clusters on prefabricated gold electrodes on top of Si$_3$N$_4$ substrates. Sn cluster films exhibit a variety of conductance behaviours during and after deposition. It is observed that the evolution of conductance during the onset at the percolation threshold depends on the film morphology. Samples showing different responses during onset also behave differently after the end of deposition; therefore, all samples were categorized according to their onset behaviour. After the end of deposition, when a bias voltage is applied, the conductance of Sn films steps up and down between various well-defined conductance levels.
It is also observed that in many cases the conductance levels between which these devices jump are close to integer multiples of the conductance quantum. There are several possible explanations for the steps in conductance; one is the formation and breaking of conducting paths in the cluster films by electric-field-induced evaporation and electromigration, respectively. The stepping behaviour is similar to that of non-volatile memory devices and is hence very interesting to explore for potential applications.
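The conductance quantum referred to here is G0 = 2e²/h, about 77.5 microsiemens. Checking whether measured plateaus sit near integer multiples of it is a one-line computation; the plateau values below are hypothetical illustrations, not data from the thesis.

```python
# Check whether conductance plateaus sit near integer multiples of the
# conductance quantum G0 = 2e^2/h (plateau values are illustrative).
e = 1.602176634e-19   # elementary charge, C
h = 6.62607015e-34    # Planck constant, J s
G0 = 2 * e**2 / h     # conductance quantum, siemens (~7.75e-5 S)

measured = [7.7e-5, 1.56e-4, 3.1e-4]   # hypothetical plateau values, S
multiples = [g / G0 for g in measured]
print([round(m) for m in multiples])   # nearest-integer multiples of G0
```

A measured level counts as "close to" a quantized value when g/G0 falls within a small tolerance of an integer; the thesis reports many such near-integer levels in Sn films.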
327

La renormalisation constructive pour la théorie quantique des champs non commutative / Constructive renormalization for noncommutative quantum field theory

Wang, Zhituo 07 December 2011 (has links) (PDF)
The main part of this thesis is concerned with constructive Euclidean field theory. Constructive theory (or constructive renormalization) proposes the mathematically rigorous study of the existence and of the non-perturbative properties of quantum field theory. The traditional methods of constructive theory are cluster expansions and Wilson's renormalization group. But these two methods also have drawbacks: first, the cluster-expansion and Mayer-expansion techniques are complicated and therefore difficult to use. Second, these methods cannot be applied to noncommutative quantum field theories, where there is no locality in space and the interaction is non-local. Recently a new method has been found, called the loop vertex expansion (LVE), a combination of the intermediate-field technique and the forest formula (the BKAR formula), which successfully solves both problems. With this method, the Mayer expansion is no longer needed and the cluster expansion is also simplified. And since the interaction term becomes non-local as well, this method applies well to noncommutative quantum field theories, for example the Grosse-Wulkenhaar model, a λΦ4 model with a harmonic potential on Moyal space. It is the first renormalizable model in noncommutative quantum field theory. Moreover, its β function vanishes when the ultraviolet fixed point of the theory is reached, so it is also a natural model to construct non-perturbatively. 
In this thesis we construct the 2-dimensional Grosse-Wulkenhaar model with the LVE. In the remainder of the thesis we also consider the construction of noncommutative manifolds via coherent states, and topological polynomials for Feynman graphs in commutative and noncommutative theories.
328

Classification using residual vector quantization

Ali Khan, Syed Irteza 13 January 2014 (has links)
Residual vector quantization (RVQ) is a 1-nearest-neighbor (1-NN) type of technique: RVQ is a multi-stage implementation of regular vector quantization, in which an input is successively quantized to the nearest codevector in each stage codebook. In classification, nearest neighbor techniques are very attractive since they model the ideal Bayes class boundaries very accurately. However, nearest neighbor classification techniques require a large representative dataset. Since a test input is assigned a class membership only after an exhaustive search of the entire training set, even a reasonably large training set can make the implementation cost of the nearest neighbor classifier infeasible. Although the k-d tree structure offers a far more efficient implementation of 1-NN search, the cost of storing the data points can become prohibitive, especially in higher dimensions. RVQ offers a cost-effective implementation of 1-NN-based classification: because of the direct-sum structure of the RVQ codebook, the memory and computational cost of a 1-NN-based system is greatly reduced. Although, compared to an equivalent 1-NN system, the multi-stage implementation of the RVQ codebook compromises the accuracy of the class boundaries, the classification error has been empirically shown to be within 3% to 4% of the performance of an equivalent 1-NN-based classifier.
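The multi-stage, direct-sum structure described above can be sketched as follows. This is a toy illustration under simplifying assumptions (plain k-means per stage, greedy stage-by-stage encoding, illustrative sizes), not the thesis's classifier.

```python
import numpy as np

rng = np.random.default_rng(2)

def train_rvq(data, n_stages=3, codebook_size=4, iters=20):
    """Train a residual vector quantizer: each stage runs plain k-means
    on the residual left over by the previous stages (direct-sum codebook)."""
    stages, residual = [], data.copy()
    for _ in range(n_stages):
        cb = residual[rng.choice(len(residual), codebook_size, replace=False)].copy()
        for _ in range(iters):
            assign = np.argmin(((residual[:, None] - cb[None]) ** 2).sum(2), axis=1)
            for k in range(codebook_size):
                if np.any(assign == k):
                    cb[k] = residual[assign == k].mean(0)
        stages.append(cb)
        residual = residual - cb[assign]
    return stages

def quantize(x, stages):
    """Successively quantize x to the nearest codevector in each stage;
    the reconstruction is the sum of the selected codevectors."""
    approx = np.zeros_like(x)
    for cb in stages:
        approx += cb[np.argmin(((x - approx - cb) ** 2).sum(1))]
    return approx

data = rng.normal(0, 1, (200, 2))
stages = train_rvq(data)
err = np.mean([np.linalg.norm(x - quantize(x, stages)) for x in data])
print(float(err))
```

Storage is n_stages × codebook_size vectors, yet the direct sum spans codebook_size^n_stages distinct reconstruction points, which is the source of the cost advantage over a flat 1-NN codebook.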
329

Automatic Target Recognition In Infrared Imagery

Bayik, Tuba Makbule 01 September 2004 (has links) (PDF)
The task of automatically recognizing targets in IR imagery has a history of approximately 25 years of research and development. ATR is an application of pattern recognition and scene analysis in the defense industry, and it is still a challenging problem. This thesis may be viewed as an exploratory study of the ATR problem in which promising recognition algorithms from the area are implemented. The examined algorithms are among the solutions to the ATR problem that are reported to have good performance in the literature. Throughout the study, PCA, subspace LDA, ICA, the nearest mean classifier, the K-nearest-neighbors classifier, the nearest neighbor classifier and the LVQ classifier are implemented, and their performances are compared in terms of recognition rate. According to the simulation results, the system that uses ICA as the feature extractor and LVQ as the classifier performs best. The good performance of this system is due to the higher-order statistics of the data and the success of LVQ in modifying the decision boundaries.
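Of the classifiers compared above, the nearest mean classifier is the simplest to state: keep one prototype (the class mean) per class and assign each sample to the closest prototype. A minimal sketch on synthetic two-class data (illustrative, not the thesis's IR imagery):

```python
import numpy as np

rng = np.random.default_rng(3)

def nearest_mean_fit(X, y):
    """Nearest mean classifier: one prototype (the mean) per class."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(0) for c in classes])

def nearest_mean_predict(X, classes, means):
    # Assign each sample to the class whose mean is closest.
    d = np.linalg.norm(X[:, None] - means[None], axis=2)
    return classes[np.argmin(d, axis=1)]

# Two well-separated synthetic "target" classes in 4-D feature space.
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(5, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
classes, means = nearest_mean_fit(X, y)
acc = float((nearest_mean_predict(X, classes, means) == y).mean())
print(acc)
```

LVQ can be viewed as a refinement of this scheme: it keeps several prototypes per class and moves them toward or away from training samples, which is what lets it reshape the decision boundaries.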
330

The Implementation Complexity Of Finite Impulse Response Digital Filters Under Different Coefficient Quantization Schemes And Realization Structures

Akyurek, Sefa 01 December 2004 (has links) (PDF)
It has been aimed to investigate the complexity of discrete-coefficient FIR filters when they are implemented in transposed form and the coefficient redundancy is removed by the n-Dimensional Reduced Adder Graph (RAG-n) approach. Filters with coefficients represented by different quantization schemes have been designed or selected from the literature; their transposed-form implementations after the RAG-n process have been compared in terms of complexity. A Genetic Algorithm (GA) based design algorithm has been implemented and used for the design of integer-coefficient filters. Algorithms for the realization of filter coefficients in Canonic Signed Digit (CSD) form and for the realization of the n-Dimensional Reduced Adder Graph (RAG-n) have also been implemented. Filter performance is measured as the Normalized Peak Ripple Magnitude, and implementation complexity as the number of adders used to implement the filter coefficients. The number of adders is calculated using two different methods: CSD and RAG-n. The RAG-n method has been applied to FIR digital filter design methods that do not consider reduction of implementation complexity via RAG-n with the transposed direct-form filter structure. For implementation complexity, it is concluded that the "RAG-n algorithm with transposed direct form filter structure" provides better results than "CSD, SPT coefficient design followed by transposed direct form filter structure" in terms of the number of adders used in the implementation.
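The CSD adder-count measure used above can be sketched: recode each coefficient into digits from {-1, 0, +1} with no two adjacent nonzero digits, then a constant multiplication costs one adder or subtractor per nonzero digit beyond the first. This is the standard textbook recoding, not the thesis's exact implementation.

```python
def to_csd(n):
    """Canonic Signed Digit form of a positive integer: digits in
    {-1, 0, +1}, no two adjacent nonzeros (least significant first)."""
    digits = []
    while n != 0:
        if n % 2 == 0:
            digits.append(0)
            n //= 2
        else:
            d = 2 - (n % 4)          # +1 if n = 1 (mod 4), else -1
            digits.append(d)
            n = (n - d) // 2
    return digits

def adder_cost(n):
    """Multiplying by n costs (#nonzero CSD digits - 1) adders/subtractors."""
    return max(sum(1 for d in to_csd(n) if d) - 1, 0)

# 15 = 16 - 1 in CSD: one subtractor instead of three adders for 8+4+2+1.
print(to_csd(15), adder_cost(15))
```

RAG-n goes further than this per-coefficient count by sharing partial sums across all coefficients of the filter, which is why it yields fewer adders than CSD costing in the comparison above.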
