261 |
The Political Implications of Nietzsche's Perspectivism
Etro-Beko, Tansy Anada 30 November 2018 (has links)
In the first chapter of my doctoral thesis, entitled The Political Implications of Nietzsche's Perspectivism, I argue that, because of conflicting passages throughout his oeuvre, Nietzsche is best understood as a twofold metaphysical sceptic: a sceptic about the existence of the external world and, consequently, a sceptic about such a world's correspondence to our perspectives. Nietzsche presents a threefold conceptualization of 'nihilism' and a twofold one of the 'will to power.' Neutral nihilism is humanity's inescapable condition of having no non-humanly created meanings and values. This state can be interpreted positively, as an opportunity to create one's own meanings and values, or negatively, as a terrifying incentive to return to dogmatism. The will to power is life before and as it becomes life (the unqualified will to power) and all the realities in it (the qualifiable will to power). The combination of these ontological concepts brings me to my second chapter and to the determination of Nietzsche's general epistemology: perspectivism. Perspectivism is an admittedly created, ontologically derived interpretation of knowledge, which both entails and goes beyond relativism. Nietzsche's perspectivism is constructed to support any norm that allows for univocal evaluations, not just Nietzsche's. Moreover, it can be derived from any ontology that conceptualizes life as a unit of growth and decay and human beings as creators of all their perspectives. These two elastic concepts allow me to propose, in my third chapter, that although Nietzsche's texts disavow an all-inclusive or representative democracy in favour of a new spiritual aristocracy, the proper political implications of perspectivism allow for democracy, and Nietzsche can be read as approving of the direct democracy that arises naturally among elite peers.
|
262 |
Radio resource sharing with edge caching for multi-operator in large cellular networks
Sanguanpuak, T. (Tachporn) 04 January 2019 (has links)
Abstract
The aim of this thesis is to devise new paradigms on radio resource sharing including cache-enabled virtualized large cellular networks for mobile network operators (MNOs). Also, self-organizing resource allocation for small cell networks is considered.
In such networks, the MNOs rent radio resources from the infrastructure provider (InP) to support their subscribers. The need to reduce operational costs while at the same time significantly increasing the usage of existing network resources leads to a paradigm in which the MNOs share their infrastructure, i.e., base stations (BSs), antennas, spectrum, and edge caches, among themselves. In this regard, we integrate the theoretical insights provided by stochastic geometry to model spectrum and infrastructure sharing for large cellular networks.
In the first part of the thesis, we study the non-orthogonal multi-MNO spectrum allocation problem for small cell networks with the goal of maximizing the overall network throughput, defined as the expected weighted sum rate of the MNOs. Each MNO is assumed to serve multiple small cell BSs (SBSs). We adopt the many-to-one stable matching game framework to tackle this problem. We also investigate the role of power allocation schemes for SBSs using Q-learning.
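The abstract does not spell out the learning setup, so the following is only a generic sketch of tabular epsilon-greedy Q-learning of the kind commonly used for SBS power allocation; the power levels, reward function, and interference model are invented placeholders, not the thesis's actual formulation:

```python
import math
import random

POWER_LEVELS = [0.1, 0.5, 1.0]      # hypothetical SBS transmit-power actions (W)
EPSILON, ALPHA, GAMMA = 0.1, 0.5, 0.9

def reward(power, interference):
    """Toy reward: log-throughput minus a power penalty (invented, for illustration)."""
    return math.log2(1.0 + power / (0.1 + interference)) - 0.2 * power

def q_learning(episodes=2000, seed=0):
    rng = random.Random(seed)
    q = {a: 0.0 for a in range(len(POWER_LEVELS))}   # single-state Q-table
    for _ in range(episodes):
        # epsilon-greedy action selection over the power levels
        if rng.random() < EPSILON:
            a = rng.randrange(len(POWER_LEVELS))
        else:
            a = max(q, key=q.get)
        r = reward(POWER_LEVELS[a], interference=rng.uniform(0.0, 0.5))
        q[a] += ALPHA * (r + GAMMA * max(q.values()) - q[a])
    return q

q = q_learning()   # under this toy reward the highest-power action dominates
```

Under this placeholder reward the learned Q-values rank the highest power level first; the thesis's actual scheme would couple such learners across SBSs and MNOs.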
In the second part, we model and analyze the infrastructure sharing system considering a single buyer MNO and multiple seller MNOs. The MNOs are assumed to operate over their own licensed spectrum bands while sharing BSs. We assume that multiple seller MNOs compete with each other to sell their infrastructure to a potential buyer MNO. The optimal strategy for the seller MNOs, in terms of the fraction of infrastructure to be shared and the price of the infrastructure, is obtained by computing the equilibrium of a Cournot-Nash oligopoly game.
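The Cournot-Nash step can be illustrated numerically. In this sketch the linear inverse-demand function and cost figures are invented for illustration; the thesis's actual game is over infrastructure fractions and prices:

```python
def cournot_equilibrium(n_sellers=2, a=10.0, b=1.0, c=1.0, iters=200):
    """Best-response iteration for a Cournot game with inverse demand
    p = a - b * total_quantity and constant marginal cost c (toy numbers).
    Seller i's best response to the others is q_i = (a - c - b*q_others) / (2b).
    """
    q = [1.0] * n_sellers                      # arbitrary starting quantities
    for _ in range(iters):
        for i in range(n_sellers):
            q_others = sum(q) - q[i]
            q[i] = max(0.0, (a - c - b * q_others) / (2.0 * b))
    return q

eq = cournot_equilibrium()   # analytic duopoly answer: (a - c)/(3b) = 3.0 each
```

Best-response dynamics contract in this linear game, so the iteration converges to the closed-form equilibrium quantity (a - c)/((n + 1) b) per seller.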
Finally, we develop a game-theoretic framework to model and analyze cache-enabled virtualized cellular networks in which the network infrastructure, e.g., BSs and cache storage, owned by an InP, is rented and shared among multiple MNOs. We formulate a Stackelberg game model with the InP as the leader and the MNOs as the followers. The InP tries to maximize its profit by optimizing its infrastructure rental fee. Each MNO aims to minimize its infrastructure cost by minimizing the cache intensity under a probabilistic delay constraint for the user equipment (UE). Since the MNOs share their rented infrastructure, we apply a cooperative game concept, namely the Shapley value, to divide the cost among the MNOs.
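The final cost-division step uses the Shapley value, which can be computed generically by averaging each player's marginal cost over all join orders. The three-MNO cost function below is a made-up toy, not data from the thesis:

```python
from itertools import permutations

def shapley(players, cost):
    """Average each player's marginal cost over all join orders."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += cost(coalition | {p}) - cost(coalition)
            coalition = coalition | {p}
    return {p: v / len(orders) for p, v in phi.items()}

def cost(s):
    """Toy rental cost: an MNO renting alone pays 10; shared renting costs 8 per MNO."""
    n = len(s)
    if n == 0:
        return 0.0
    return 10.0 if n == 1 else 8.0 * n

phi = shapley(["MNO1", "MNO2", "MNO3"], cost)
```

By symmetry each MNO is charged 8 (less than the 10 it would pay alone), and the shares sum exactly to the grand-coalition cost of 24, which is the efficiency property the Shapley value guarantees.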
|
263 |
Physical and tangible information visualization / Visualisation physique et tangible de l'information
Jansen, Yvonne 10 March 2014 (has links)
Visualizations in the most general sense of external, physical representations of information are older than the invention of writing.
Generally, external representations promote external cognition and visual thinking, and humans developed a rich set of skills for crafting and exploring them. Computers immensely increased the amount of data we can collect and process, and diversified the ways we can represent it visually. Computer-supported visualization systems, studied in the field of information visualization (infovis), have become powerful and complex, and sophisticated interaction techniques are now necessary to control them. With the widening of technological possibilities beyond classic desktop settings, new opportunities have emerged. Not only can display surfaces of arbitrary shapes and sizes be used to show richer visualizations, but new input technologies can also be used to manipulate them. For example, tangible user interfaces are an emerging input technology that capitalizes on humans' abilities to manipulate physical objects. However, these technologies have barely been studied in the field of information visualization. A first problem is poorly defined terminology. In this dissertation, I define and explore the conceptual space of embodiment for information visualization. For visualizations, embodiment refers to the level of congruence between the visual elements of the visualization and their physical shape. This concept subsumes previously introduced concepts such as tangibility and physicality. For example, tangible computing aims to represent virtual objects through a physical form, but the form is not necessarily congruent with the virtual object. A second problem is the scarcity of convincing applications of tangible user interfaces for infovis purposes. In information visualization, standard computer displays and input devices are still widespread and considered most effective.
Both, however, provide opportunities for embodiment: input devices can be specialized and adapted so that their physical shape reflects their functionality within the system; computer displays can be substituted by transformable shape-changing displays or, eventually, by programmable matter which can take any physical shape imaginable. Research on such shape-changing interfaces has so far been technology-driven, while the utility of such interfaces for information visualization has remained unexplored. In this thesis, I suggest embodiment as a design principle for infovis purposes, and I demonstrate and validate the efficiency and usability of both embodied visualization controls and embodied visualization displays through three controlled user experiments. I then present a conceptual interaction model and visual notation system that facilitates the description, comparison, and criticism of various types of visualization systems, and illustrate it through case studies of currently existing point solutions. Finally, to aid the creation of physical visualizations, I present a software tool that supports users in building their own visualizations. The tool is suitable for users new to both visualization and digital fabrication, and can help to increase users' awareness of and interest in data in their everyday life. In summary, this thesis contributes to the understanding of the value of emerging physical representations for information visualization.
|
264 |
Search for Higgs boson decays to beyond-the-Standard-Model light bosons in four-lepton events with the ATLAS detector at the LHC
Chiu, Justin 22 December 2020 (has links)
This thesis presents the search for the dark sector process h -> Zd Zd -> 4l in events collected by the ATLAS detector at the Large Hadron Collider in 2015–2018. In this theorized process, the Standard Model Higgs boson (h) decays to four leptons via two intermediate Beyond-the-Standard-Model particles each called Zd. This process arises from interactions of the Standard Model with a dark sector. A dark sector consists of one or more new particles that have limited or zero interaction with the Standard Model, such as the new vector boson Zd (dark photon). It could have a rich and interesting phenomenology like the visible sector (the Standard Model) and could naturally address many outstanding problems in particle physics. For example, it could contain a particle candidate for dark matter. In particular, Higgs decays to Beyond-the-Standard-Model particles are well-motivated theoretically and are not tightly constrained; current measurements of Standard Model Higgs properties permit the fraction of such decays to be as high as approximately 30%. The results of this search do not show evidence for the existence of the h -> Zd Zd -> 4l process and are therefore interpreted in terms of upper limits on the branching ratio B(h -> Zd Zd) and the effective Higgs mixing parameter kappa^prime. / Graduate
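Setting an upper limit when no excess is observed can be illustrated with a toy one-bin counting experiment. This is generic frequentist statistics for illustration only, not the analysis's actual limit-setting machinery (which uses profile-likelihood/CLs methods over many bins):

```python
import math

def poisson_cdf(n_obs, mean):
    """P(N <= n_obs) for N ~ Poisson(mean)."""
    return sum(math.exp(-mean) * mean**k / math.factorial(k)
               for k in range(n_obs + 1))

def upper_limit(n_obs, background, cl=0.95, s_max=50.0):
    """Smallest signal mean s with P(N <= n_obs | s + b) <= 1 - cl,
    found by bisection (toy single-bin counting-experiment limit)."""
    lo, hi = 0.0, s_max
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(n_obs, mid + background) > 1.0 - cl:
            lo = mid      # still too probable to see so few events: s too small
        else:
            hi = mid
    return hi

limit = upper_limit(n_obs=0, background=0.0)   # classic zero-event result: ln(20)
```

For zero observed events and zero background the 95% CL limit is ln(20), roughly 3 signal events; dividing by luminosity times acceptance would convert such a count limit into a branching-ratio limit.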
|
265 |
Recherche de résonances se désintégrant en paire de quarks top-antitop avec l'expérience ATLAS / Search for new resonances decaying into a top-antitop quark pair with the ATLAS experiment
Barbe, William 19 September 2019 (has links)
The Standard Model of particle physics describes three of the four fundamental interactions, and all of its predictions have been experimentally confirmed. However, there are still questions that the Standard Model cannot answer. Several theoretical models are being explored, and some predict new resonances that would decay into a top-antitop quark pair and could be observed by the ATLAS detector at the LHC collider. In 2026, the LHC will restart after a significant upgrade phase to increase its luminosity. It is in this context that the studies of FATALIC, a chip proposed as a replacement for the front-end electronics of the ATLAS hadronic tile calorimeter, were carried out.
The studies showed that FATALIC is able to reconstruct the parameters of an analog signal using three gain channels and a dynamic gain switch. Simulations showed that the expected performance of FATALIC's fast channel was within the required specifications. Then, a search for new particles decaying into a top-antitop quark pair was presented, using 36.1 fb-1 of data from proton-proton collisions at 13 TeV recorded at the LHC during 2015 and 2016. This search concentrated on the semi-leptonic decay channel of the top-antitop quark pair, where the final state has a signature with exactly one lepton, hadronic jets, and missing transverse energy. The estimation of the multi-jet background was presented in detail. A search in the top-antitop invariant mass spectrum was performed in both the resolved and boosted topologies, and the compatibility of the data with the Standard Model predictions was tested. No significant deviation from the Standard Model predictions was found, and limits on the signal cross sections of the benchmark models were set. The difficulties encountered in estimating the backgrounds and in profiling the systematic uncertainties for the 36.1 fb-1 analysis motivated the search for a new method to perform the global background estimate. The Functional Decomposition (FD) algorithm is a new method for searching for new particles in an invariant mass spectrum by separating the background contribution from the resonant contributions. FD was tested to verify its performance on pseudo-data from the top-antitop and « 4t BSM » analyses. First, tests were conducted to check whether FD creates spurious signal in invariant mass spectra. The first version suffered from this problem, and FD was then improved to reduce the amount of spurious signal. Finally, signal-injection studies were carried out, and FD showed difficulties in modelling the signal contribution and separating it from the background for signals with widths greater than 3%.
|
266 |
Electroweak processes in the framework of effective field theory / Elektroslabé procesy v rámci efektivní polní teorie
Soukup, Petr January 2009 (has links)
Title: Electroweak processes in the framework of effective field theory Author: Petr Soukup Department: Institute of Particle and Nuclear Physics Supervisor: RNDr. Karol Kampf, Ph.D. Supervisor's e-mail address: kampf@troja.mff.cuni.cz Abstract: In this thesis, we study electroweak processes within the framework of effective field theory employing the approach of effective Lagrangians. We mainly focus on the decay process H → γγ. A complete set of SU(2) × U(1) invariant dimension-six operators is utilized. We present a brief introduction to the GWS Standard Model and dimension-six effective operators. The one-loop Standard Model contribution to the process H → γγ is then evaluated, followed by calculation of the tree-level and one-loop contributions of the dimension-six operators to the same process. We then present a brief general summary of the renormalization procedure in quantum field theory. Renormalization of the performed calculations is implemented, and possible issues that may arise during renormalization of such a non-renormalizable theory are also discussed. In the end, we discuss the obtained results, mainly the dependence of the H → γγ decay rate on the effective theory's free parameters and the scale of the new physics Λ. Focus is placed on possible deviations from Standard Model results. The results are plotted in charts.
|
267 |
Člověk ve světle vědy / Human in the light of science
Houdek, Tomáš January 2020 (has links)
The paper thematizes the concept of science in the mid- and late thinking of Friedrich Nietzsche in the context of his understanding of the scientific cognition of both the human and its world. The study introduces the problem of science and cognition in general in connection with significant motifs of Nietzsche's thinking: morality, the revaluation of all values, thinking and living "beyond good and evil", freedom, the human body, the motif of the superhuman, ascetic ideals, and more. Emphasis is put on the problem of veracity in the context of Nietzsche's attitude to idealism. Keywords: Human; Superhuman; Nietzsche; Moral Philosophy; Cognition; Truth; Error; Intellect; Body; The Will to Power; Drive and Instinct; Evolution; Idealism; Nihilism; Amor fati; Beyond Good and Evil; Freedom; Ascetic Ideals
|
268 |
Det Sublima / The Sublime
Karlsson, Klas Richard January 2011 (has links)
The project deals with the sublime in relation to architecture, as event, discourse, and institution, through an event that dynamically shifts architectural parameters and positions them in continuous change in relation to subject-time and object-space: a cross-border experience of architecture.
|
269 |
A digital integer-N PLL architecture using a pulse-shrinking TDC for mmWave applications. / En digital integer-N PLL arkitektur baserad på en pulskrympande TDC för millimetervågsapplikationer.
Richter, Simon January 2023 (has links)
With the move of broadband cellular networks towards 5G taking off and preparatory work on 6G and beyond starting, the need for low-complexity, low-power, high-performance frequency synthesis using Phase-Locked Loops (PLLs) increases. As we move deeper into the mm-wave frequencies and push towards frequencies on the order of 50-70 GHz, design challenges with existing PLL architectures, such as limited technology scaling and limited in-band noise performance, become more apparent. Other designs have tried to overcome these problems, for example by using single-bit phase detection at the cost of increased complexity when trying to control the bandwidth, or by designing the loop with lower bandwidth to suppress in-band noise at the cost of requiring a lower-noise and thus more power-hungry oscillator. This thesis proposes a new phase-locked loop architecture, implemented in a 22nm node, to combat these issues, utilizing a Pulse-Shrinking Time-to-Digital Converter (PS-TDC) offering sub-picosecond resolution with minimal power consumption in lock. The results in this thesis show the viability of such a design, offering good in-band performance, allowing for wide bandwidth, and enabling the use of a cheaper low-power Digitally-Controlled Oscillator (DCO). The PS-TDC architecture, combined with the control logic implemented in this project, can drastically decrease power consumption in lock while compensating for process variations to optimize jitter performance. Additionally, by utilizing a Phase-Frequency Detector (PFD) and gear-shifting logic, it has been shown that robust and fast locking can be achieved.
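The pulse-shrinking principle is easy to illustrate with a behavioral model: each delay stage shrinks the input pulse by a fixed amount, and the digital code is the number of stages the pulse survives. The sketch below is a highly idealized toy model (uniform shrinkage, no noise or mismatch), not the thesis's circuit:

```python
def ps_tdc_code(pulse_width_ps, shrink_per_stage_ps):
    """Count how many shrinking stages a pulse survives.

    Idealized: every stage removes exactly `shrink_per_stage_ps` from the
    pulse, so the counter value is approximately pulse_width / shrink_per_stage.
    """
    code = 0
    width = pulse_width_ps
    while width > 0.0:
        width -= shrink_per_stage_ps
        code += 1
    return code

# A 100 ps phase-error pulse measured with 1 ps shrinkage per stage:
code = ps_tdc_code(100.0, 1.0)   # -> 100
```

The quantization step, and hence the TDC resolution, is the per-stage shrinkage, which is why shrinking by fractions of a picosecond yields sub-picosecond resolution.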
|
270 |
Adaptive and Robust Multi-Gigabit Techniques Based MmWave Massive MU-MIMO Beamforming For 5G Wireless and Mobile Communications Systems. A Road Map for Simple and Robust Beamforming Scheme and Algorithms Based Wideband MmWave Massive MU-MIMO for 5G Wireless and Mobile Communications Systems
Alabdullah, Ali AbdulMohsin S. January 2021 (has links)
Over recent years, research has focused on innovative solutions across the various aspects and phases of meeting the high data-rate and energy demands of fifth-generation and beyond (B5G) systems. This thesis aims to improve the energy efficiency, error rates, low-resolution ADC/DAC design, antenna array structures, and sum-rate performance of single-cell downlink broadband millimetre-wave (mmWave) systems with orthogonal frequency division multiplexing (OFDM) modulation deploying multi-user massive multiple-input multiple-output (MU-mMIMO), by applying robust beamforming techniques and detection algorithms that support multiple streams per user (UE) in various environments and scenarios, in order to achieve a low-complexity system design with reliable performance and significant improvement in the users' perceived quality of service (QoS).

The performance of the four 5G candidate mmWave frequencies, 28 GHz, 39 GHz, 60 GHz, and 73 GHz, is investigated for indoor/outdoor propagation scenarios, including path-loss models and multipath delay-spread values. Results are compared to confirm that the received power and delay spread decrease with increasing frequency. The results were also validated against the measurement findings for 60 GHz.

Several proposed beamforming design models are then studied, and modified Hybrid Beamforming (HBF) algorithms are implemented for indoor/outdoor scenarios over large-scale-fading wideband mmWave/Rayleigh channels. Firstly, three beamforming schemes based on diagonalizing the Equivalent Virtual Channel Matrix (EVCM), combined with optimal linear combining methods, are presented to overcome the self-interference problems in Quasi-Orthogonal Space-Time Block Code (QO-STBC) systems over narrowband mmWave Single-User mMIMO (SU-mMIMO). The evaluated results show that the proposed Singular Value Decomposition (SVD)-based beamforming outperforms conventional beamforming and standard QO-STBC techniques in terms of BER and spectral efficiency.

Next, the proposed HBF algorithms with fully and partially connected structures are developed and applied to maximize the sum-rate and symbol error rate (SER) performance of the MU-mMIMO-OFDM system, including HBF based on the block diagonalization (BD) method with constrained/unconstrained RF power, codebook-based HBF, and Kalman-based HBF schemes, in addition to modified near-optimal linear HBF Zero-Forcing (HBF-ZF) and HBF Minimum Mean Square Error (HBF-MMSE) schemes, considering both fully connected and partially connected structures.

Finally, simulation results obtained with the MATLAB platform demonstrate that the proposed codebook-based HBF and, most notably, the unconstrained-RF-power-based HBF algorithms achieve significant performance gains in terms of SER and sum-rate efficiency, and show high immunity against deformities and disturbances in the system compared with other HBF algorithm schemes. / Ministry of Higher Education and Scientific Research, the Republic of Iraq
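The zero-forcing idea behind the HBF-ZF baseline can be shown in a tiny fully digital sketch: for an invertible channel matrix H, precoding with W = H^-1 removes inter-user interference, since the effective channel HW is the identity. The 2x2 real-valued example below is purely illustrative (actual mmWave HBF splits W into analog and digital stages and works with complex channels and power constraints):

```python
def zf_precoder_2x2(h):
    """Zero-forcing precoder for a 2x2 real channel matrix: W = H^-1."""
    (a, b), (c, d) = h
    det = a * d - b * c
    if det == 0:
        raise ValueError("channel matrix is singular; ZF is undefined")
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul_2x2(x, y):
    """Plain 2x2 matrix product."""
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

H = [[1.0, 0.4], [0.3, 1.0]]   # toy channel with cross-user interference
W = zf_precoder_2x2(H)
E = matmul_2x2(H, W)           # effective channel: identity, i.e. no interference
```

The cost of ZF is noise/power amplification when H is ill-conditioned, which is what motivates MMSE-style regularized variants such as HBF-MMSE.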
|