441 |
Adding temporal plasticity to a self-organizing incremental neural network using temporal activity diffusion / Om att utöka ett självorganiserande inkrementellt neuralt nätverk med temporal plasticitet genom temporal aktivitetsdiffusion. Lundberg, Emil, January 2015
Vector Quantization (VQ) is a classic optimization problem and a simple approach to pattern recognition. Applications include lossy data compression, clustering, and speech and speaker recognition. Although VQ has largely been replaced by time-aware techniques like Hidden Markov Models (HMMs) and Dynamic Time Warping (DTW) in some applications, such as speech and speaker recognition, VQ still retains some significance due to its much lower computational cost — especially for embedded systems. A recent study also demonstrates a multi-section VQ system which achieves performance rivaling that of DTW in an application to handwritten signature recognition, at a much lower computational cost. Adding sensitivity to temporal patterns to a VQ algorithm could help improve such results further. SOTPAR2 is such an extension of Neural Gas, an Artificial Neural Network algorithm for VQ. SOTPAR2 uses a conceptually simple approach, based on adding lateral connections between network nodes and creating “temporal activity” that diffuses through adjacent nodes. The activity in turn biases the nearest-neighbor classifier toward network nodes with high activity, and the SOTPAR2 authors report improvements over Neural Gas in an application to time series prediction. This report presents an investigation of how this same extension affects the quantization and prediction performance of the self-organizing incremental neural network (SOINN) algorithm. SOINN is a VQ algorithm which automatically chooses a suitable codebook size and can also be used for clustering with arbitrary cluster shapes. This extension is found not to improve the performance of SOINN; in fact, it makes performance worse in all experiments attempted. A discussion of this result is provided, along with a discussion of the impact of the algorithm parameters, and possible future work to improve the results is suggested.
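The diffusion-and-bias mechanism described in the abstract can be sketched in a few lines. This is a rough illustrative stand-in, not the published SOTPAR2 update rules: the constants (`bias`, `boost`, `spread`, `decay`) and the exact order of decay, boost, and leakage are assumptions.

```python
import numpy as np

def biased_winner(codebook, activity, x, bias=0.5):
    """Nearest-neighbour winner whose distance is discounted by temporal
    activity; `bias` (an assumed knob) sets the strength of the discount."""
    dists = np.linalg.norm(codebook - x, axis=1)
    return int(np.argmin(dists - bias * activity))

def diffuse(activity, neighbors, winner, boost=1.0, spread=0.3, decay=0.8):
    """Decay all activity, boost the winner, and let part of the boost leak
    to laterally connected nodes (a rough stand-in for SOTPAR2 diffusion)."""
    activity = decay * activity
    activity[winner] += boost
    for j in neighbors[winner]:
        activity[j] += spread * boost
    return activity

# Tiny demo: three 1-D codebook nodes with lateral links 0-1 and 1-2.
codebook = np.array([[0.0], [1.0], [2.0]])
neighbors = {0: [1], 1: [0, 2], 2: [1]}
activity = np.zeros(3)

w1 = biased_winner(codebook, activity, np.array([0.1]))  # plain nearest: node 0
activity = diffuse(activity, neighbors, w1)              # activity leaks to node 1
# An input slightly nearer node 2 is now pulled toward the active node 1.
w2 = biased_winner(codebook, activity, np.array([1.55]))
```

Without activity the second input would select node 2; the diffused activity pulls the winner to node 1, which is the bias the report investigates.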
|
442 |
Design, Analysis, and Applications of Approximate Arithmetic Modules. Ullah, Salim, 06 April 2022
From the initial computing machines, Colossus of 1943 and ENIAC of 1945, to modern high-performance data centers and the Internet of Things (IoT), four design goals, i.e., high performance, energy efficiency, resource utilization, and ease of programmability, have remained a beacon of development for the computing industry. During this period, the computing industry has exploited the advantages of technology scaling and microarchitectural enhancements to achieve these goals. However, with the end of Dennard scaling, these techniques offer diminishing energy and performance advantages. Therefore, it is necessary to explore alternative techniques for satisfying the computational and energy requirements of modern applications. Towards this end, one promising technique is analyzing and relaxing the strict notion of correctness in various layers of the computation stack. Most modern applications across the computing spectrum---from data centers to IoT---interact with and analyze real-world data and make decisions accordingly. These applications are broadly classified as Recognition, Mining, and Synthesis (RMS). Instead of producing a single golden answer, these applications produce several feasible answers, and they possess an inherent error-resilience to the inexactness of processed data and corresponding operations. Utilizing this inherent error-resilience, the paradigm of Approximate Computing relaxes the strict notion of computational correctness to realize high-performance and energy-efficient systems with acceptable-quality outputs.
Prior works on circuit-level approximations have mainly focused on Application-specific Integrated Circuits (ASICs). However, ASIC-based solutions suffer from long time-to-market and costly development cycles. These limitations of ASICs can be overcome by utilizing the reconfigurable nature of Field Programmable Gate Arrays (FPGAs). However, due to architectural differences between ASICs and FPGAs, applying ASIC-based approximation techniques to FPGA-based systems does not result in proportional performance and energy gains. Therefore, to exploit the principles of approximate computing in FPGA-based hardware accelerators for error-resilient applications, FPGA-optimized approximation techniques are required. Further, most state-of-the-art approximate arithmetic operators lack a generic approximation methodology for implementing new approximate designs as an application's accuracy and performance requirements change. These works also lack a methodology where a machine learning model can be used to correlate an approximate operator with its impact on the output quality of an application. This thesis addresses these research challenges by designing and exploring FPGA-optimized logic-based approximate arithmetic operators. As multiplication is one of the most computationally complex and most frequently used arithmetic operations in modern applications, such as Artificial Neural Networks (ANNs), we consider it for most of the approximation techniques proposed in this thesis.
The primary focus of the work is to provide a framework for generating FPGA-optimized approximate arithmetic operators and efficient techniques to explore approximate operators for implementing hardware accelerators for error-resilient applications.
Towards this end, we first present various designs of resource-optimized, high-performance, and energy-efficient accurate multipliers. Although modern FPGAs host high-performance DSP blocks to perform multiplication and other arithmetic operations, our analysis and results show that the orthogonal approach of having resource-efficient and high-performance multipliers is necessary for implementing high-performance accelerators. Due to the differences in the type of data processed by various applications, the thesis presents individual designs for unsigned, signed, and constant multipliers. Compared to the multiplier IPs provided by the FPGA Synthesis tool, our proposed designs provide significant performance gains. We then explore the designed accurate multipliers and provide a library of approximate unsigned/signed multipliers. The proposed approximations target the reduction in the total utilized resources, critical path delay, and energy consumption of the multipliers. We have explored various statistical error metrics to characterize the approximation-induced accuracy degradation of the approximate multipliers. We have also utilized the designed multipliers in various error-resilient applications to evaluate their impact on applications' output quality and performance.
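The statistical characterization mentioned above can be illustrated with a minimal sketch. The multiplier below is a generic operand-truncation stand-in, not one of the thesis's actual designs; the metrics (error rate, mean/max error distance, mean relative error distance) are the ones commonly used in the approximate-computing literature.

```python
from itertools import product

def approx_mul(a, b, k=1):
    """Approximate multiply that zeroes the k least-significant bits of each
    operand first; a simple stand-in for logic-level approximate multipliers,
    not one of the thesis's designs."""
    mask = ~((1 << k) - 1)
    return (a & mask) * (b & mask)

def error_metrics(k=1, bits=4):
    """Exhaustively characterize the design against the exact product for
    all bits x bits unsigned operand pairs."""
    pairs = list(product(range(1 << bits), repeat=2))
    errs = [abs(a * b - approx_mul(a, b, k)) for a, b in pairs]
    rels = [e / (a * b) for (a, b), e in zip(pairs, errs) if a * b]
    return {
        "error_rate": sum(e > 0 for e in errs) / len(errs),  # ER
        "mean_error_distance": sum(errs) / len(errs),        # MED
        "max_error_distance": max(errs),                     # worst case
        "mean_rel_error_distance": sum(rels) / len(rels),    # MRED
    }

m = error_metrics(k=1, bits=4)
```

Exhaustive enumeration is feasible for small operand widths; for wider operators, sampled or analytical characterization is used instead.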
Based on our analysis of the designed approximate multipliers, we identify the need for a framework to design application-specific approximate arithmetic operators. An application-specific approximate arithmetic operator intends to implement only the logic that can satisfy the application's overall output accuracy and performance constraints.
Towards this end, we present a generic design methodology for implementing FPGA-based application-specific approximate arithmetic operators from their accurate implementations according to the applications' accuracy and performance requirements. In this regard, we utilize various machine learning models to identify feasible approximate arithmetic configurations for various applications. We also utilize different machine learning models and optimization techniques to efficiently explore the large design space of individual operators and their utilization in various applications. In this thesis, we have used the proposed methodology to design approximate adders and multipliers.
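The design-space exploration can be pictured as searching for the accuracy/cost Pareto front of a configurable operator. The sketch below uses an assumed toy operator (OR-ing the low bits of an adder) and exhaustive evaluation; the thesis instead drives this search with machine learning models and the optimizers listed in the outline (genetic algorithms, Bayesian optimization), which approximate exactly this front without enumerating everything.

```python
from itertools import product

def approx_add(a, b, cut):
    """Approximate adder: below bit `cut`, OR the operand bits instead of
    adding them (an assumed LUT-saving style, not one of the thesis's
    designs); the upper part is added exactly."""
    mask = (1 << cut) - 1
    low = (a & mask) | (b & mask)
    high = (a & ~mask) + (b & ~mask)  # multiple of 2**cut, low bits are zero
    return high | low

def evaluate(cut, bits=6):
    """Mean error distance over all operand pairs, plus a toy cost model in
    which every approximated bit saves one unit of LUT cost."""
    n = 1 << bits
    med = sum(abs((a + b) - approx_add(a, b, cut))
              for a, b in product(range(n), repeat=2)) / n ** 2
    return med, bits - cut

points = [(cut, *evaluate(cut)) for cut in range(7)]
# Pareto front: configurations not dominated in both error and cost.
pareto = [p for p in points
          if not any(q[1] <= p[1] and q[2] <= p[2] and q != p for q in points)]
```

In this toy sweep the error rises and the cost falls monotonically with `cut`, so every configuration is Pareto-optimal; real operator spaces are far less orderly, which is what motivates the ML-guided search.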
This thesis also explores other layers of the computation stack (cross-layer) for possible approximations to satisfy an application's accuracy and performance requirements. Towards this end, we first present a low bit-width and highly accurate quantization scheme for pre-trained Deep Neural Networks (DNNs). The proposed quantization scheme does not require re-training (fine-tuning the parameters) after quantization. We also present a resource-efficient FPGA-based multiplier that utilizes our proposed quantization scheme. Finally, we present a framework to allow the intelligent exploration and highly accurate identification of the feasible design points in the large design space enabled by cross-layer approximations. The proposed framework utilizes a novel Polynomial Regression (PR)-based method to model approximate arithmetic operators. The PR-based representation enables machine learning models to better correlate an approximate operator's coefficients with their impact on an application's output quality.
1. Introduction
1.1 Inherent Error Resilience of Applications
1.2 Approximate Computing Paradigm
1.2.1 Software Layer Approximation
1.2.2 Architecture Layer Approximation
1.2.3 Circuit Layer Approximation
1.3 Problem Statement
1.4 Focus of the Thesis
1.5 Key Contributions and Thesis Overview
2. Preliminaries
2.1 Xilinx FPGA Slice Structure
2.2 Multiplication Algorithms
2.2.1 Baugh-Wooley’s Multiplication Algorithm
2.2.2 Booth’s Multiplication Algorithm
2.2.3 Sign Extension for Booth’s Multiplier
2.3 Statistical Error Metrics
2.4 Design Space Exploration and Optimization Techniques
2.4.1 Genetic Algorithm
2.4.2 Bayesian Optimization
2.5 Artificial Neural Networks
3. Accurate Multipliers
3.1 Introduction
3.2 Related Work
3.3 Unsigned Multiplier Architecture
3.4 Motivation for Signed Multipliers
3.5 Baugh-Wooley’s Multiplier
3.6 Booth’s Algorithm-based Signed Multipliers
3.6.1 Booth-Mult Design
3.6.2 Booth-Opt Design
3.6.3 Booth-Par Design
3.7 Constant Multipliers
3.8 Results and Discussion
3.8.1 Experimental Setup and Tool Flow
3.8.2 Performance comparison of the proposed accurate unsigned multiplier
3.8.3 Performance comparison of the proposed accurate signed multiplier with the state-of-the-art accurate multipliers
3.8.4 Performance comparison of the proposed constant multiplier with the state-of-the-art accurate multipliers
3.9 Conclusion
4. Approximate Multipliers
4.1 Introduction
4.2 Related Work
4.3 Unsigned Approximate Multipliers
4.3.1 Approximate 4 × 4 Multiplier (Approx-1)
4.3.2 Approximate 4 × 4 Multiplier (Approx-2)
4.3.3 Approximate 4 × 4 Multiplier (Approx-3)
4.4 Designing Higher Order Approximate Unsigned Multipliers
4.4.1 Accurate Adders for Implementing 8 × 8 Approximate Multipliers from 4 × 4 Approximate Multipliers
4.4.2 Approximate Adders for Implementing Higher-order Approximate Multipliers
4.5 Approximate Signed Multipliers (Booth-Approx)
4.6 Results and Discussion
4.6.1 Experimental Setup and Tool Flow
4.6.2 Evaluation of the Proposed Approximate Unsigned Multipliers
4.6.3 Evaluation of the Proposed Approximate Signed Multiplier
4.7 Conclusion
5. Designing Application-specific Approximate Operators
5.1 Introduction
5.2 Related Work
5.3 Modeling Approximate Arithmetic Operators
5.3.1 Accurate Multiplier Design
5.3.2 Approximation Methodology
5.3.3 Approximate Adders
5.4 DSE for FPGA-based Approximate Operators Synthesis
5.4.1 DSE using Bayesian Optimization
5.4.2 MOEA-based Optimization
5.4.3 Machine Learning Models for DSE
5.5 Results and Discussion
5.5.1 Experimental Setup and Tool Flow
5.5.2 Accuracy-Performance Analysis of Approximate Adders
5.5.3 Accuracy-Performance Analysis of Approximate Multipliers
5.5.4 AppAxO MBO
5.5.5 ML Modeling
5.5.6 DSE using ML Models
5.5.7 Proposed Approximate Operators
5.6 Conclusion
6. Quantization of Pre-trained Deep Neural Networks
6.1 Introduction
6.2 Related Work
6.2.1 Commonly Used Quantization Techniques
6.3 Proposed Quantization Techniques
6.3.1 L2L: Log_2_Lead Quantization
6.3.2 ALigN: Adaptive Log_2_Lead Quantization
6.3.3 Quantitative Analysis of the Proposed Quantization Schemes
6.3.4 Proposed Quantization Technique-based Multiplier
6.4 Results and Discussion
6.4.1 Experimental Setup and Tool Flow
6.4.2 Image Classification
6.4.3 Semantic Segmentation
6.4.4 Hardware Implementation Results
6.5 Conclusion
7. A Framework for Cross-layer Approximations
7.1 Introduction
7.2 Related Work
7.3 Error-analysis of approximate arithmetic units
7.3.1 Application Independent Error-analysis of Approximate Multipliers
7.3.2 Application Specific Error Analysis
7.4 Accelerator Performance Estimation
7.5 DSE Methodology
7.6 Results and Discussion
7.6.1 Experimental Setup and Tool Flow
7.6.2 Behavioral Analysis
7.6.3 Accelerator Performance Estimation
7.6.4 DSE Performance
7.7 Conclusion
8. Conclusions and Future Work
|
443 |
Apprentissage basé sur le Qini pour la prédiction de l’effet causal conditionnel. Belbahri, Mouloud-Beallah, 08 1900
Uplift models deal with cause-and-effect inference for a specific factor, such as a marketing intervention. In practice, these models are built on individual data from randomized experiments. A targeted group contains individuals who are subject to an action; a control group serves for comparison. Uplift modeling is used to order the individuals with respect to the value of a causal effect, e.g., positive, neutral, or negative.
First, we propose a new way to perform model selection in uplift regression models. Our methodology is based on the maximization of the Qini coefficient. Because model selection corresponds to variable selection, the task is daunting and intractable if done in a straightforward manner when the number of variables to consider is large. To realistically search for a good model, we conceived a searching method based on an efficient exploration of the regression-coefficient space combined with a lasso penalization of the log-likelihood. There is no explicit analytical expression for the Qini surface, so unveiling it is not easy. Our idea is to gradually uncover the Qini surface in a manner inspired by response surface designs. The goal is to find a reasonable local maximum of the Qini by exploring the surface near optimal values of the penalized coefficients. We openly share our code through the R package tools4uplift. Though there are some computational methods available for uplift modeling, most of them exclude statistical regression models. Our package intends to fill this gap. It comprises tools for: i) quantization, ii) visualization, iii) variable selection, iv) parameter estimation and v) model validation. The package allows practitioners to use our methods with ease and to refer to the methodological papers for the details.
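The Qini coefficient that the model selection maximizes can be computed from a ranking of individuals by predicted uplift. The sketch below is a minimal stand-in for the tools4uplift implementation: the curve accumulates treated successes minus rescaled control successes down the ranking, and the coefficient is taken here as the average height of that curve above the random-targeting line (a simplification of the usual area-based definition).

```python
import numpy as np

def qini_curve(score, treated, outcome):
    """Incremental gains when targeting individuals by decreasing predicted
    uplift: cumulative treated successes minus control successes rescaled
    by the treated/control ratio seen so far."""
    order = np.argsort(-score)
    t, y = treated[order], outcome[order]
    n_t, n_c = np.cumsum(t), np.cumsum(1 - t)
    y_t, y_c = np.cumsum(y * t), np.cumsum(y * (1 - t))
    ratio = np.divide(n_t, n_c, out=np.zeros(len(t)), where=n_c > 0)
    return y_t - y_c * ratio

def qini_coefficient(score, treated, outcome):
    """Average height of the Qini curve above the random-targeting line."""
    q = qini_curve(score, treated, outcome)
    diag = np.linspace(q[-1] / len(q), q[-1], len(q))
    return float(np.mean(q - diag))

# Toy data: the two actual responders-to-treatment are ranked first by
# `good` and last by its negation.
treated = np.array([1, 1, 0, 0, 1, 0])
outcome = np.array([1, 1, 0, 0, 0, 0])
good = np.array([0.9, 0.8, 0.1, 0.2, 0.3, 0.15])
q_good = qini_coefficient(good, treated, outcome)
q_bad = qini_coefficient(-good, treated, outcome)
```

A score that ranks true responders first scores positive, its reversal negative, which is what makes the coefficient usable as a model-selection criterion.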
Uplift is a particular case of causal inference. Causal inference tries to answer questions such as "What would be the result if we gave this patient treatment A instead of treatment B?" The answer to this question is then used as a prediction for a new patient. In the second part of the thesis, we place more emphasis on prediction. Most existing approaches are adaptations of random forests to the uplift case. Several split criteria have been proposed in the literature, all relying on maximizing heterogeneity. However, in practice, these approaches are prone to overfitting. In this work, we bring a new vision to uplift modeling. We propose a new loss function defined by leveraging a connection with the Bayesian interpretation of the relative risk. Our solution is developed for a specific twin neural network architecture that jointly optimizes the marginal probabilities of success for treated and control individuals. We show that this model is a generalization of the uplift logistic interaction model. We also modify the stochastic gradient descent algorithm to allow for structured sparse solutions, which greatly helps in fitting our uplift models. We openly share our Python code for practitioners wishing to use our algorithms.
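The uplift logistic interaction model that the twin network generalizes is easy to write down: the treatment indicator interacts with every covariate, and the predicted uplift is the difference between the two "twin" success probabilities. The coefficients below are illustrative assumptions, not fitted values from the thesis.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def uplift_interaction(x, beta0, beta, gamma0, gamma):
    """p(y=1 | x, t) = sigmoid(beta0 + x.beta + t*(gamma0 + x.gamma));
    the predicted uplift is p(y | x, t=1) - p(y | x, t=0)."""
    base = beta0 + x @ beta
    p_treated = sigmoid(base + gamma0 + x @ gamma)  # t = 1 branch
    p_control = sigmoid(base)                       # t = 0 branch
    return p_treated - p_control

# Assumed coefficients: the first covariate drives a positive treatment
# effect, the second covariate none.
beta0, beta = 0.0, np.array([0.5, -0.5])
gamma0, gamma = 0.0, np.array([2.0, 0.0])
u_pos = uplift_interaction(np.array([1.0, 0.0]), beta0, beta, gamma0, gamma)
u_zero = uplift_interaction(np.array([0.0, 1.0]), beta0, beta, gamma0, gamma)
```

The twin architecture replaces the two linear branches with shared network heads, but the output structure, two marginal success probabilities whose difference is the uplift, is the same.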
We had the rare opportunity to collaborate with industry and gain access to data from large-scale marketing campaigns well suited to the application of our methods. We show empirically that our methods are competitive with the state of the art on real data as well as in several simulation scenarios.
|
444 |
Řízení dynamických systémů v reálném čase / Real Time Dynamic System Control. Adamík, Pavel, January 2009
This thesis focuses on the methodology of controlling dynamic systems in real time. It contains a review of the fundamentals of control theory and of the elementary building blocks of regulator construction. A list of the relevant mathematical formulas follows, together with the mathematical basis for system simulation using differential calculus and the solution of differential equations. Furthermore, a systematic approach to the design of a general regulator using modern simulation techniques is included. After the results are confirmed in the Matlab system, the modelling of transport delay and quantization is addressed.
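The closed-loop simulation workflow the abstract describes, a regulator around a plant integrated with a differential-equation solver, can be sketched minimally as below. The first-order plant, the PID gains, and the step size are illustrative assumptions, not values from the thesis.

```python
def simulate_pid(kp, ki, kd, setpoint=1.0, tau=1.0, dt=0.01, steps=2000):
    """Close the loop around a first-order plant tau*dy/dt = -y + u with a
    discrete PID regulator, integrating the plant ODE by forward Euler."""
    y, integral, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv  # PID control law
        y += dt * (-y + u) / tau                   # Euler step of the plant
        prev_err = err
    return y

final = simulate_pid(kp=2.0, ki=1.0, kd=0.05)
```

With integral action the output settles on the setpoint; tools like Matlab/Simulink, used for verification in the thesis, automate exactly this kind of loop with better integrators.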
|
445 |
Contributions à la sonification d’image et à la classification de sons. Toffa, Ohini Kafui, 11 1900
The objective of this thesis is to study, on the one hand, the problem of image sonification and to solve it through new models of mapping between the visual and sound domains, and, on the other hand, to study the problem of sound classification and to solve it with methods that have a proven track record in the field of image recognition.
Image sonification is the translation of image data (shape, color, texture, objects) into sounds. It is used in vision assistance and image accessibility domains for visually impaired people. Due to its complexity, an image sonification system that properly conveys the image data to sound in an intuitive way is not easy to design.
Our first contribution is to propose a new low-level image sonification system which uses a hierarchical visual feature-based approach to translate, using musical notes, most of the properties of an image (color, gradient, edge, texture, region) to the audio domain, in a very predictable way which is then easily decodable by a human listener.
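A predictable feature-to-note mapping of the kind described can be sketched as below. The concrete scheme (hue to pitch, brightness to velocity, the MIDI range) is an assumed illustration, not the thesis's actual hierarchical mapping.

```python
import colorsys

def pixel_to_note(r, g, b, low=48, high=84):
    """Map a pixel's hue to a MIDI pitch in [low, high] and its lightness
    to a MIDI velocity in [0, 127]; a deliberately simple, invertible
    mapping so a listener can learn to decode it (assumed scheme)."""
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    pitch = low + round(h * (high - low))
    velocity = round(l * 127)
    return pitch, velocity

red_note = pixel_to_note(255, 0, 0)  # hue 0 maps to the lowest pitch
```

Because the mapping is monotone in each feature, a trained listener can in principle invert pitch back to hue and loudness back to brightness, which is the predictability property the system aims for.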
Our second contribution is a high-level sonification Android application which is complementary to our first contribution because it implements the translation of the objects and the semantic content of an image to the audio domain. It also proposes a dataset for image sonification.
Finally, in the audio domain, our third contribution generalizes the Local Binary Pattern (LBP) to 1D and combines it with audio features for an environmental sound classification task. The proposed method outperforms methods that use handcrafted features with classical machine learning algorithms and is faster than any convolutional neural network method. It represents a better choice when there is data scarcity or minimal computing power.
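The core of a 1D generalization of LBP can be sketched directly: each sample's neighbors are thresholded against the centre sample and the resulting bits are packed into a code, whose histogram serves as a texture descriptor. The bit ordering and the histogram normalization below are assumptions for illustration; the thesis combines such a descriptor with other audio features.

```python
import numpy as np

def lbp_1d(signal, radius=1):
    """1D Local Binary Pattern: compare each sample's 2*radius neighbours
    with the centre sample and pack the comparison bits into a code."""
    codes = []
    for i in range(radius, len(signal) - radius):
        centre = signal[i]
        neigh = np.r_[signal[i - radius:i], signal[i + 1:i + 1 + radius]]
        bits = (neigh >= centre).astype(int)
        codes.append(int(bits @ (1 << np.arange(bits.size))))
    return np.array(codes)

def lbp_histogram(signal, radius=1):
    """Normalized histogram of 1D-LBP codes; the descriptor that would be
    concatenated with other audio features for classification."""
    codes = lbp_1d(signal, radius)
    hist = np.bincount(codes, minlength=1 << (2 * radius))
    return hist / hist.sum()

ramp = lbp_1d(np.array([0.0, 1.0, 2.0, 3.0]))  # rising signal
```

Monotone rises and falls produce distinct constant codes, which is what makes the histogram sensitive to local signal texture at negligible computational cost.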
|
446 |
Design of a grating lobe mitigated antenna array architecture integrated with low loss PCB filtering structures / Design av en sidloblindrande gruppantenn integrerad med låg förlust PCB-filterstrukturer. Salvador Lopez, Eduardo, January 2023
Massive multiple-input multiple-output (MIMO) systems are a reality, and modern communication systems rely on this technology to cope with the increasing need for capacity and network usage. Antenna arrays are at the heart of the massive-MIMO system and are its enabling technology. The defining cost of such a system is the number of transmit-receive (TRx) ports, as they dictate the number of control points and the associated digital-control computational capacity. Typically, users are spread along the azimuth, with only limited angular user spread along elevation. This allows the elements to be grouped in elevation, which of course limits elevation-scanning performance: the element grouping results in grating lobes when scanning in elevation. In the newly introduced frequency range 3 (FR3) of the envisioned 6G communication systems, spanning 6-20 GHz, it will not be allowed to transmit power above the horizon, so the grating lobes resulting from standard grouping must be mitigated. This project is structured into two parts. In the first part, a grating lobe mitigation technique based on irregular subarray grouping is developed, utilizing the well-known Penrose irregular tessellation. This tessellation is based on two geometrical shapes which, when put together, can fully tile the plane aperiodically. Introducing this aperiodicity mitigates the grating (quantization) lobes of the array. In addition, the first part develops a beamforming algorithm based on particle swarm optimization that produces the optimal weights for array steering and can also optimize some of the embedded patterns of the irregular grouping. This last optimization step of the irregular subarray patterns is used only when the grouping results in a narrow pattern in azimuth, in which case we have static single-port beamforming networks. This is of course a trade-off between the broadside gain and the azimuth steerability of the array.
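The quantization-lobe effect of subarray grouping can be reproduced numerically. A minimal sketch, assuming a 16-element uniform line array with half-wavelength spacing and four-element groups sharing a single phase shifter; the Penrose-based irregular grouping itself is not reproduced here:

```python
import numpy as np

def array_factor(phases, d=0.5, n_points=721):
    """|AF| versus angle for a uniform linear array with per-element
    excitation phases (radians); d is the element spacing in wavelengths."""
    theta = np.linspace(-np.pi / 2, np.pi / 2, n_points)
    n = np.arange(len(phases))
    af = np.exp(1j * (2 * np.pi * d * np.outer(np.sin(theta), n) + phases)).sum(axis=1)
    return theta, np.abs(af) / len(phases)

def steer_phases(n_elem, scan_deg, d=0.5, group=1):
    """Ideal steering phases, optionally shared over `group`-element
    subarrays driven by one phase shifter (the source of grating lobes)."""
    n = np.arange(n_elem)
    ideal = -2 * np.pi * d * n * np.sin(np.radians(scan_deg))
    if group > 1:
        return np.repeat(ideal.reshape(-1, group).mean(axis=1), group)
    return ideal

theta, af_elem = array_factor(steer_phases(16, 20.0))
_, af_sub = array_factor(steer_phases(16, 20.0, group=4))
# Highest lobe outside a window around the 20-degree main beam.
main = np.abs(np.degrees(theta) - 20.0) < 10
sll_elem = af_elem[~main].max()
sll_sub = af_sub[~main].max()
```

With element-level phase control the steered beam is clean; sharing one phase per four-element group leaves the intra-subarray phase unsteered, and a strong quantization lobe appears on the other side of broadside, which is exactly what the irregular tessellation is meant to break up.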
In the second part of this thesis, two low-loss band-pass filters have been developed in a PCB-integrated suspended-stripline technology. The filters were optimized for frequencies within FR3, and the resulting filtering structures can further be integrated at the input port of the proposed feeding network in the same technology. The two parts of this thesis aim to introduce, on the one hand, an antenna array architecture with subarray groupings that produce no grating lobes and, on the other hand, filtering structures with dimensions small enough to fit within the subarray footprint.
|
447 |
Coupled-Cluster in Real SpaceKottmann, Jakob Siegfried 24 August 2018 (has links)
In dieser Arbeit werden Algorithmen für die Berechnung elektronischer Korrelations- und
Anregungsenergien mittels der Coupled-Cluster Methode auf adaptiven Gittern entwickelt
und implementiert. Die jeweiligen Funktionen und Operatoren werden adaptiv durch
Multiskalenanalyse dargestellt, was eine basissatzunabhängige Beschreibung mit kontrollierter
numerischer Genauigkeit ermöglicht. Gleichungen für die Coupled-Cluster Methode
werden in einem verallgemeinerten Rahmen, unabhängig von virtuellen Orbitalen
und globalen Basissätzen, neu formuliert. Hierzu werden die amplitudengewichteten
Anregungen in virtuelle Orbitale ersetzt durch Anregungen in n-Elektronenfunktionen,
welche durch Gleichungen im n-Elektronen Ortsraum bestimmt sind. Die erhaltenen
Gleichungen können, analog zur basissatzabhängigen Form, mit leicht angepasster Interpretation
diagrammatisch dargestellt werden. Aufgrund des singulären Coulomb-Potentials
werden die Arbeitsgleichungen mit einem explizit korrelierten Ansatz regularisiert.
Coupled-Cluster singles mit genäherten doubles (CC2) und ähnliche Modelle werden,
für geschlossenschalige Systeme und in regularisierter Form, in die MADNESS Bibliothek
(eine allgemeine Bibliothek zur Darstellung von Funktionen und Operatoren mittels
Multiskalenanalyse) implementiert. Mit der vorgestellten Methode können elektronische
CC2 Paarkorrelationsenergien und Anregungsenergien mit bestimmter numerischer
Genauigkeit unabhängig von globalen Basissätzen berechnet werden, was anhand von
kleinen Molekülen verifiziert wird. / In this work algorithms for the computation of electronic correlation and excitation energies
with the Coupled-Cluster method on adaptive grids are developed and implemented.
The corresponding functions and operators are adaptively represented with multiresolution
analysis allowing a basis-set independent description with controlled numerical
accuracy. Equations for the coupled-cluster model are reformulated in a generalized
framework independent of virtual orbitals and global basis-sets. For this, the amplitude
weighted excitations into virtuals are replaced by excitations into n-electron functions
which are determined by projected equations in the n-electron position space. The resulting
equations can be represented diagrammatically analogous to basis-set dependent
approaches with slightly adjusted rules of interpretation. Due to the singular Coulomb
potential, the working equations are regularized with an explicitly correlated ansatz.
Coupled-cluster singles with approximate doubles (CC2) and similar models are implemented
for closed-shell systems and in regularized form into the MADNESS library
(a general library for the representation of functions and operators with multiresolution
analysis). With the presented approach electronic CC2 pair-correlation energies
and excitation energies can be computed with definite numerical accuracy and without
dependence on global basis sets, which is verified on small molecules.
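The central idea of the abstract — an adaptive representation whose only knob is the requested accuracy, with no global basis set — can be illustrated with a toy one-dimensional refinement. MADNESS itself uses adaptive multiwavelets and separated operator representations far beyond this sketch; the function, tolerance, and refinement criterion below are all invented for illustration.

```python
import numpy as np

def adaptive_grid(f, a, b, tol, depth=0, max_depth=20):
    """Recursively split [a, b] until the linear-interpolation error at the
    midpoint falls below tol; return the sorted list of grid points."""
    m = 0.5 * (a + b)
    err = abs(f(m) - 0.5 * (f(a) + f(b)))
    if err < tol or depth >= max_depth:
        return [a, b]
    left = adaptive_grid(f, a, m, tol, depth + 1, max_depth)
    right = adaptive_grid(f, m, b, tol, depth + 1, max_depth)
    return left[:-1] + right  # drop the duplicated midpoint

f = lambda x: np.exp(-5.0 * x * x)   # smooth, sharply peaked near 0
pts = adaptive_grid(f, -4.0, 4.0, tol=1e-4)

# The grid refines automatically where f varies fast and stays coarse
# in the flat tails -- accuracy is the only input parameter.
spacing = np.diff(pts)
print(f"{len(pts)} points, finest h = {spacing.min():.4f}, "
      f"coarsest h = {spacing.max():.1f}")
```

The same "refine until the local representation error meets the tolerance" loop, applied to multiwavelet coefficients in 3n dimensions, is what makes the basis-set-free treatment in the abstract possible.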
|
448 |
Modelling of Mobile Fading Channels with Fading Mitigation Techniques.Shang, Lei, lei.shang@ieee.org January 2006 (has links)
This thesis aims to contribute to the development of wireless communication systems. The work consists of three parts: the first part is a discussion of general digital communication systems, the second part focuses on wireless channel modelling and fading mitigation techniques, and the third part discusses the possible application of advanced digital signal processing, especially time-frequency representation and blind source separation, to wireless communication systems. The first part considers the general digital communication systems that will be incorporated in later parts. Today's wireless communication systems are a branch of general digital communication systems, employing various techniques of A/D (analog-to-digital) conversion, source coding, error correction coding, modulation, synchronization, signal detection in noise, channel estimation, and equalization. We study and develop digital communication algorithms to enhance the performance of wireless communication systems. In the second part we focus on wireless channel modelling and fading mitigation techniques. A modified Jakes' method is developed for Rayleigh fading channels. We investigate the level-crossing rate (LCR), the average duration of fades (ADF), the probability density function (PDF), the cumulative distribution function (CDF) and the autocorrelation function (ACF) of this model. The simulated results are verified against the analytical Clarke's channel model. We also construct a frequency-selective geometrical-based hyperbolically distributed scatterers (GBHDS) model for a macro-cell mobile environment with the proper statistical characteristics. The modified Clarke's model and the GBHDS model may be readily expanded to a MIMO channel model; thus we study the MIMO fading channel, specifically modelling the MIMO channel in the angular domain. A detailed analysis of the Gauss-Markov approximation of the fading channel is also given.
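For readers unfamiliar with Jakes' method, a textbook sum-of-sinusoids Rayleigh fading simulator looks roughly as follows. This is the classic construction, not the specific modification developed in the thesis; the oscillator count, Doppler shift, and sampling parameters are illustrative.

```python
import numpy as np

def jakes_fading(f_d, t, n_osc=32, seed=1):
    """Complex channel gain h(t) for maximum Doppler shift f_d [Hz],
    built from n_osc sinusoids whose Doppler frequencies sample the
    Clarke/Jakes spectrum; the envelope |h(t)| is Rayleigh distributed."""
    rng = np.random.default_rng(seed)
    n = np.arange(1, n_osc + 1)
    alpha = 2.0 * np.pi * (n - 0.5) / n_osc        # arrival angles
    phi = rng.uniform(0.0, 2.0 * np.pi, n_osc)     # in-phase phases
    psi = rng.uniform(0.0, 2.0 * np.pi, n_osc)     # quadrature phases
    doppler = 2.0 * np.pi * f_d * np.cos(alpha)    # per-path Doppler [rad/s]
    h_i = np.cos(np.outer(t, doppler) + phi).sum(axis=1)
    h_q = np.sin(np.outer(t, doppler) + psi).sum(axis=1)
    return (h_i + 1j * h_q) / np.sqrt(n_osc)

t = np.arange(0, 2.0, 1e-4)              # 2 s sampled at 10 kHz
h = jakes_fading(f_d=50.0, t=t)          # 50 Hz maximum Doppler
envelope = np.abs(h)
print(f"mean power: {np.mean(envelope**2):.2f}")  # expected near 1.0
```

Statistics such as the LCR and ADF mentioned above can then be estimated directly from `envelope` and compared against Clarke's analytical expressions.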
Two fading mitigation techniques are investigated: Orthogonal Frequency Division Multiplexing (OFDM) and spatial diversity. In the third part, we devote ourselves to the exciting fields of Time-Frequency Analysis and Blind Source Separation and investigate the application of these powerful Digital Signal Processing (DSP) tools to improve the performance of wireless communication systems.
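The reason OFDM mitigates frequency-selective fading can be shown in a few lines: with a cyclic prefix at least as long as the channel memory, the multipath channel factors into independent flat subchannels, each fixed by a one-tap equalizer. This is a minimal noiseless sketch; the sizes and channel taps are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sc, cp = 64, 16                         # subcarriers, cyclic prefix length
bits = rng.integers(0, 2, n_sc)
symbols = 2.0 * bits - 1.0                # BPSK symbol on each subcarrier

tx = np.fft.ifft(symbols)                 # OFDM modulation (IFFT)
tx_cp = np.concatenate([tx[-cp:], tx])    # prepend cyclic prefix

h = np.array([0.9, 0.0, 0.4j])            # frequency-selective 3-tap channel
rx = np.convolve(tx_cp, h)[: len(tx_cp)]  # linear convolution, truncated

rx_no_cp = rx[cp : cp + n_sc]             # drop CP -> circular convolution
rx_f = np.fft.fft(rx_no_cp)               # back to the frequency domain
h_f = np.fft.fft(h, n_sc)                 # channel frequency response
equalized = rx_f / h_f                    # one-tap equalizer per subcarrier

decoded = (equalized.real > 0).astype(int)
print("bit errors:", np.sum(decoded != bits))  # 0 in this noiseless sketch
```

Without the cyclic prefix, the same channel smears symbols into each other and the per-subcarrier division no longer inverts it exactly.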
|
449 |
Fases geométricas, quantização de Landau e computação quântica holonômica para partículas neutras na presença de defeitos topológicosBakke Filho, Knut 06 August 2009 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / We start this work by studying the appearance of geometric quantum phases in both the relativistic and the non-relativistic quantum dynamics of a neutral particle with permanent
magnetic and electric dipole moment which interacts with external electric and magnetic
magnetic and electric dipole moment which interacts with external electric and magnetic
fields in the presence of linear topological defects. We describe the linear topological
defects using the approach proposed by Katanaev and Volovich, where the topological
defects in solids are described by line elements which are solutions of the Einstein's equations
in the context of general relativity. We also analyze the influence of non-inertial
effects in the quantum dynamics of a neutral particle using two distinct reference frames
for the observers: one is the Fermi-Walker reference frame and another is a rotating frame.
As a result, we shall see that the difference between these two reference frames is in the
presence/absence of dragging effects of the spacetime, which influence the
phase shift of the wave function of the neutral particle. In the following, we shall use our
study of geometric quantum phases to make an application on the Holonomic Quantum
Computation, where we shall show a new approach to implement the Holonomic Quantum
Computation via the interaction between the dipole moments of the neutral particle
and external fields and the presence of linear topological defects. Another application of
the Holonomic Quantum Computation is based on the structure of the topological defects
in graphene layers. In the presence of topological defects, a graphene layer shows two
distinct phase shifts: one comes from the mix of Fermi points while the other phase shift
comes from the topology of the defect. To provide a geometric description for each phase
shift in the graphene layer, we use the Kaluza-Klein theory where we establish that the
extra dimension describes the Fermi points in the graphene layer. Hence, we can implement
the Holonomic Quantum Computation through the possibility of building cones and
anticones of graphite in such a way that we can control the quantum fluxes in graphene layers.
In the last part of this work, we study the Landau quantization for neutral particles in
both the relativistic and the non-relativistic dynamics. In the non-relativistic dynamics,
we study the Landau quantization in the presence of topological defects in both an inertial
and a non-inertial reference frame. In the relativistic quantum dynamics, we start our
study with the Landau quantization in the Minkowski spacetime, considering two different gauge
field configurations. Finally, we study the relativistic Landau quantization for neutral particles in
the Cosmic Dislocation spacetime. / Neste trabalho estudamos inicialmente o surgimento de fases geométricas nas dinâmicas quânticas relativística e não-relativística de uma partícula neutra que possui momento de
dipolo magnético e elétrico permanente interagindo com campos elétricos e magnéticos externos
na presença de defeitos topológicos lineares. Para descrevermos defeitos topológicos
lineares usamos a aproximação proposta por Katanaev e Volovich, onde defeitos lineares em sólidos são descritos por elementos de linha que são soluções das equações de Einstein
no contexto da relatividade geral. Analisamos também a influência de efeitos não-inerciais na dinâmica quântica de uma partícula neutra em dois tipos distintos de referenciais para
os observadores: um é o referencial de Fermi-Walker e outro é um referencial girante.
Vemos que a diferença entre os dois referenciais está na presença/ausência de efeitos de arrasto
do espaço-tempo que irá influenciar diretamente na mudança de fase na função de
onda da partícula neutra. Em seguida, usamos nosso estudo de fases geométricas para
fazer aplicações na Computação Quântica Holonômica onde mostramos uma nova maneira de implementar a Computação Quântica Holonômica através da interação entre momentos
de dipolo e campos externos e pela presença de defeitos topológicos lineares. Outra
aplicação para a Computação Quântica Holonômica está baseada na estrutura de defeitos
topológicos em um material chamado grafeno. Na presença de defeitos topológicos lineares,
esse material apresenta duas fases quânticas de origens distintas: uma da mistura
dos pontos de Fermi e outra da topologia do defeito. Para dar uma descrição geométrica para a origem de cada fase no grafeno usamos a Teoria de Kaluza-Klein, onde a dimensão extra sugerida por esta teoria descreve os pontos de Fermi no grafeno. Portanto, a implementação da Computação Quântica Holonômica no grafeno está baseada na possibilidade
de construir cones e anticones de grafite de tal maneira que se possa controlar os fluxos
quânticos no grafeno. Na última parte deste trabalho estudamos a quantização de Landau
para partículas neutras tanto na dinâmica não-relativística quanto na dinâmica relativística. Na dinâmica não-relativística, estudamos a quantização de Landau na presença
de defeitos em um referencial inercial e, em seguida, em um referencial não-inercial. Na
dinâmica relativística, estudamos inicialmente a quantização de Landau no espaço-tempo
plano em duas configurações de campos diferentes. Por fim, estudamos a quantização de
Landau relativística para partículas neutras no espaço-tempo da deslocação cósmica.
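The geometric phase at the heart of holonomic quantum computation can be checked numerically in a few lines: a spin-1/2 eigenstate transported around a closed loop of field directions acquires a phase equal to minus half the enclosed solid angle. The loop below is a generic illustration, unrelated to the specific defect geometries studied in the thesis.

```python
import numpy as np

def spin_up_state(theta, phi):
    """+1 eigenstate of n.sigma for a field pointing along (theta, phi)."""
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

theta = np.pi / 3                        # cone half-angle of the field loop
phis = np.linspace(0.0, 2.0 * np.pi, 2001)
states = [spin_up_state(theta, p) for p in phis]

# Discrete Berry phase: minus the accumulated phase of successive overlaps.
overlaps = [np.vdot(a, b) for a, b in zip(states[:-1], states[1:])]
berry = -np.angle(np.prod(overlaps))

expected = -np.pi * (1 - np.cos(theta))  # minus half the solid angle
print(f"{berry:.4f} vs {expected:.4f}")  # both ~ -pi/2
```

Because the phase depends only on the solid angle traced out, and not on how fast the loop is traversed, it is robust to timing noise — the property holonomic gates exploit.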
|
450 |
Praktické ukázky zpracování signálů / Practical examples of signal processingHanzálek, Pavel January 2019 (has links)
The thesis focuses on the issue of signal processing. Using practical examples, it illustrates individual signal processing operations from a practical point of view. For each of the selected signal processing operations, an application is created in MATLAB, including a graphical interface for easier operation. Each chapter first treats its topic from a theoretical point of view and then shows, through a practical demonstration, how the operation is used in practice. The individual applications are described mainly in terms of how they are operated and the results they can produce. The results of the practical part are presented in the appendix of the thesis.
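As a flavor of the kind of demonstration described (the thesis itself uses MATLAB applications with GUIs), here is a comparable minimal example in Python: recovering the frequencies of a noisy two-tone signal from its magnitude spectrum. All signal parameters are invented for illustration.

```python
import numpy as np

fs = 1000.0                                # sampling rate [Hz]
t = np.arange(0, 1.0, 1.0 / fs)            # 1 s of samples
rng = np.random.default_rng(42)
x = (np.sin(2 * np.pi * 50 * t)            # 50 Hz tone
     + 0.5 * np.sin(2 * np.pi * 120 * t)   # weaker 120 Hz tone
     + 0.5 * rng.standard_normal(t.size))  # additive white noise

spectrum = np.abs(np.fft.rfft(x)) / t.size # one-sided magnitude spectrum
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)  # bin frequencies [Hz]

# The two largest spectral peaks sit at the tone frequencies.
peak_order = np.argsort(spectrum)[::-1][:2]
print(sorted(freqs[peak_order].tolist()))  # [50.0, 120.0]
```

With a 1 s window the bins fall on exact 1 Hz multiples, so both tones land on single bins with no leakage — the kind of detail such a practical demonstration is meant to expose.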
|