941 |
L'univers aux grandes échelles : études de l'homogénéité cosmique et de l'énergie noire à partir des relevés de quasars BOSS et eBOSS / The universe on large scales : studies of cosmic homogeneity and dark energy with the BOSS and eBOSS quasar surveys
Laurent, Pierre 14 September 2016 (has links)
Ce travail de thèse se sépare en deux volets. Le premier volet concerne l'étude de l'homogénéité de l'univers, et le second une mesure de l'échelle BAO, qui constitue une règle standard permettant de mesurer l'évolution du taux d'expansion de l'univers. Ces deux analyses reposent sur l'étude de la structuration (ou clustering) des quasars des relevés BOSS et eBOSS, qui couvrent la gamme en redshift 0,9 < z < 2,8. Les mesures des observables caractérisant la structuration de l'univers aux grandes échelles sont très sensibles aux effets systématiques, nous avons donc étudié ces effets en profondeur. Nous avons mis en évidence que les sélections de cibles quasars BOSS et eBOSS ne sont pas parfaitement homogènes, et corrigé cet effet. Au final, la mesure de la fonction de corrélation des quasars nous a permis de mesurer le biais des quasars sur la gamme en redshift 0,9 < z < 2,8. Nous obtenons la mesure actuelle la plus précise du biais, b = 3,85 ± 0,11 dans la gamme 2,2 < z < 2,8 pour le relevé BOSS, et b = 2,44 ± 0,04 dans la gamme 0,9 < z < 2,2 pour le relevé eBOSS. Le Principe Cosmologique stipule que notre univers est isotrope et homogène à grande échelle. Il s'agit d'un des postulats de base de la cosmologie moderne. En étudiant la structuration à très grande échelle des quasars, nous avons prouvé l'isotropie spatiale de l'univers dans la gamme 0,9 < z < 2,8, indépendamment de toute hypothèse et cosmologie fiducielle. L'isotropie spatiale stipule que l'univers est isotrope dans chaque couche de redshift. La combiner au principe de Copernic, qui stipule que nous ne nous situons pas à une position particulière dans l'univers, permet de prouver que notre univers est homogène aux grandes échelles. Nous avons effectué une mesure de la dimension de corrélation fractale de l'univers, D₂(r), en utilisant un nouvel estimateur, inspiré de l'estimateur de Landy-Szalay pour la fonction de corrélation.
En corrigeant notre estimateur du biais des quasars, nous avons mesuré (3 - D₂(r)) = (6,0 ± 2,1) x 10⁻⁵ entre 250 h⁻¹ Mpc et 1200 h⁻¹ Mpc pour le relevé eBOSS, dans la gamme 0,9 < z < 2,2. Pour le relevé BOSS, nous obtenons (3 - D₂(r)) = (3,9 ± 2,1) x 10⁻⁵, dans la gamme 2,2 < z < 2,8. De plus, nous montrons que le modèle Lambda-CDM décrit très bien la transition d'un régime structuré vers un régime homogène. D'autre part, nous avons mesuré la position du pic BAO dans les fonctions de corrélation des quasars BOSS et eBOSS, détecté à 2,5 sigma dans les deux relevés. Si nous mesurons le paramètre α, qui correspond au rapport entre la position du pic mesuré et la position prédite par une cosmologie fiducielle (en utilisant les paramètres Planck 2013), nous mesurons α = 1,074 pour le relevé BOSS, et α = 1,009 pour le relevé eBOSS. Ces mesures, combinées uniquement à la mesure locale de H₀, nous permettent de contraindre l'espace des paramètres de modèles au-delà du Lambda-CDM. / This work consists of two parts. The first one is a study of cosmic homogeneity, and the second one a measurement of the BAO scale, which provides a standard ruler that allows for a direct measurement of the expansion rate of the universe. These two analyses rely on the study of quasar clustering in the BOSS and eBOSS quasar samples, which cover the redshift range 0.9 < z < 2.8. On large scales, the measurement of statistical observables is very sensitive to systematic effects, so we studied these effects in depth. We found evidence that the target selections of BOSS and eBOSS quasars are not perfectly homogeneous, and we have corrected this effect. The measurement of the quasar correlation function provides the quasar bias in the redshift range 0.9 < z < 2.8. We obtain the most precise measurement of the quasar bias at high redshift, b = 3.85 ± 0.11, in the range 2.2 < z < 2.8 for the BOSS survey, and b = 2.44 ± 0.04 in the range 0.9 < z < 2.2 for the eBOSS survey.
The Cosmological Principle states that the universe is homogeneous and isotropic on large scales. It is one of the basic assumptions of modern cosmology. By studying quasar clustering on large scales, we have proved ''spatial isotropy'', i.e. the fact that the universe is isotropic in each redshift bin. This has been done in the range 0.9 < z < 2.8 without any assumption about a fiducial cosmology. If we combine spatial isotropy with the Copernican Principle, which states that we do not occupy a peculiar place in the universe, it follows that the universe is homogeneous on large scales. We provide a measurement of the fractal correlation dimension of the universe, D₂(r), which is 3 for a homogeneous distribution, using a new estimator inspired by the Landy-Szalay estimator for the correlation function. If we correct our measurement for quasar bias, we obtain (3 - D₂(r)) = (6.0 ± 2.1) x 10⁻⁵ between 250 h⁻¹ Mpc and 1200 h⁻¹ Mpc for eBOSS, in the range 0.9 < z < 2.2. For BOSS, we obtain (3 - D₂(r)) = (3.9 ± 2.1) x 10⁻⁵, in the range 2.2 < z < 2.8. Moreover, we have shown that the Lambda-CDM model provides a very good description of the transition from structure to homogeneity. We have also measured the position of the BAO peak in the BOSS and eBOSS quasar correlation functions, which yields a 2.5 sigma detection in both surveys. If we measure the α parameter, which corresponds to the ratio of the measured position of the peak to the predicted position in a fiducial cosmology (here Planck 2013), we measure α = 1.074 for BOSS, and α = 1.009 for eBOSS. These measurements, combined only with the local measurement of H₀, allow for constraints in parameter space for models beyond Lambda-CDM.
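The counts-in-spheres picture behind the fractal correlation dimension can be sketched numerically. The toy example below (an illustration of the general definition of D₂(r), not the bias-corrected Landy-Szalay-style estimator of the thesis) estimates D₂ as the log-log slope of the mean neighbour count N(<r), which is close to 3 for a homogeneous point set:

```python
import numpy as np

# Toy counts-in-spheres estimate of the fractal correlation dimension
# D2(r) (an illustration of the definition, not the bias-corrected
# Landy-Szalay-style estimator used in the thesis).
rng = np.random.default_rng(0)

# Homogeneous (Poisson) point set in a unit box as a stand-in catalogue;
# for such a distribution N(<r) ~ r^3, so D2(r) should be close to 3.
points = rng.uniform(0.0, 1.0, size=(10000, 3))

# Keep only centres well inside the box so spheres never cross the edge.
centres = points[np.all((points > 0.2) & (points < 0.8), axis=1)][:150]

# Squared distances from every centre to every point (150 x 10000).
sq_dist = ((points[None, :, :] - centres[:, None, :]) ** 2).sum(axis=2)

radii = np.array([0.05, 0.08, 0.12, 0.18])
# Mean neighbour count inside radius r, excluding the centre itself.
mean_counts = np.array([
    ((sq_dist < r * r).sum(axis=1) - 1).mean() for r in radii])

# D2(r) is the local slope of N(<r) in log-log space.
slopes = np.diff(np.log(mean_counts)) / np.diff(np.log(radii))
print(slopes)  # each entry close to 3 for a homogeneous distribution
```

A clustered catalogue would instead give D₂ < 3 on small scales, approaching 3 only beyond the homogeneity scale, which is the transition the thesis measures.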
|
942 |
Mesure de l'échelle des oscillations acoustiques de baryons dans la fonction de corrélation des forêts Lyman-α avec la distribution des quasars observés dans le relevé SDSS / Measurement of the scale of baryonic acoustic oscillations in the correlation function of the Lyman-α forest with the quasar distribution observed in the SDSS survey
Du Mas des Bourboux, Hélion 08 September 2017 (has links)
La propagation des ondes acoustiques dans le plasma primordial a laissé son empreinte sous la forme d'un pic dans la fonction de corrélation à deux points de la densité de matière. Ce pic d'oscillations acoustiques de baryons (BAO) constitue une échelle standard permettant de déterminer certains paramètres des différents modèles cosmologiques. Dans ce manuscrit de thèse, nous présentons une mise à jour de la mesure de BAO à un redshift z = 2.40, à l'aide de la fonction de corrélation croisée entre deux traceurs des fluctuations primordiales de densité de matière : les quasars de SDSS-III (BOSS) et leurs fluctuations d'absorption du flux des forêts Lyman-α. Ces fluctuations tracent la distribution d'hydrogène neutre dans le milieu intergalactique (IGM). Cette étude constitue le premier développement d'un ajustement entièrement physique de la fonction de corrélation croisée ; il prend notamment en compte la physique des quasars et la présence d'éléments plus lourds que l'hydrogène dans l'IGM. Nous y présentons également les premières simulations de notre analyse. Celles-ci nous permettent de valider l'ensemble de la procédure de mesure de l'échelle BAO. Cette étude mesure la distance de Hubble et la distance de diamètre angulaire avec respectivement une précision de 2% et 3% (intervalle à 1 σ). Nous combinons nos résultats avec d'autres mesures de BAO à des redshifts plus faibles et trouvons la densité de matière noire et d'énergie noire dans le cadre de deux différents modèles cosmologiques : ΛCDM et oΛCDM. / The acoustic wave propagation in the primordial plasma left its imprint in the two-point correlation function of the matter density field.
This baryonic acoustic oscillation (BAO) peak provides a standard ruler allowing us to infer some parameters of the different cosmological models. In this thesis manuscript we present an update of the BAO measurement at a redshift z = 2.40, from the cross-correlation function between two tracers of the primordial matter density fluctuations: quasars of SDSS-III (BOSS) and their Lyman-α-forest absorption fluctuations. These fluctuations trace the neutral hydrogen distribution in the intergalactic medium (IGM). This study presents the first development of a fully physical fit of the cross-correlation. Among other effects, it takes into account quasar physics and the distribution of IGM elements heavier than hydrogen. We also present the first simulations of our analysis. They allow us to validate the overall data analysis leading to the BAO measurement. This study measures the Hubble distance and the angular diameter distance at the 2% and 3% precision level respectively (1 σ interval). We combine our results with other BAO measurements at lower redshifts and find the dark matter density and dark energy density in the framework of two different cosmological models: ΛCDM and oΛCDM.
|
943 |
Targeting the transposable elements of the genome to enable large-scale genome editing and bio-containment technologies. / Le ciblage des éléments transposables du génome humain pour développer des technologies permettant son remaniement à grande échelle et des technologies de bio-confinement.
Castanon Velasco, Oscar 14 March 2019 (has links)
Les nucléases programmables et site-spécifiques comme CRISPR-Cas9 sont des signes avant-coureurs d’une nouvelle révolution en génie génétique et portent en germe un espoir de modification radicale de la santé humaine. Le « multiplexing » ou la capacité d’introduire plusieurs modifications simultanées dans le génome sera particulièrement utile en recherche tant fondamentale qu’appliquée. Ce nouvel outil sera susceptible de sonder les fonctions physiopathologiques de circuits génétiques complexes et de développer de meilleures thérapies cellulaires ou traitements antiviraux. En repoussant les limites du génie génétique, il sera possible d’envisager la réécriture et la conception de génomes mammifères. Le développement de notre capacité à modifier profondément le génome pourrait permettre la création de cellules résistantes aux cancers, aux virus ou même au vieillissement ; le développement de cellules ou tissus transplantables compatibles entre donneurs et receveurs ; et pourrait même rendre possible la résurrection d’espèces animales éteintes. Dans ce projet de recherche doctoral, nous présentons l’état de l’art du génie génétique « multiplex », les limites actuelles et les perspectives d’améliorations. Nous tirons profit de ces connaissances ainsi que de l’abondance des éléments transposables de notre ADN afin de construire une plateforme d’optimisation et de développement de nouveaux outils de génie génétique qui autorisent l’édition génomique à grande échelle. Nous démontrons que ces technologies permettent la production de modifications à l’échelle du génome jusqu’à trois ordres de grandeur de plus que précédemment, ouvrant la voie au développement de la réécriture des génomes de mammifères. En outre, l’observation de la toxicité engendrée par la multitude de coupures double-brins dans le génome nous a amenés à développer un bio-interrupteur susceptible d’éviter les effets secondaires des thérapies cellulaires actuelles ou futures.
Enfin, en conclusion, nous exposons les potentielles inquiétudes et menaces qu’apporte le génie génétique et proposons des pistes de réflexion pour diminuer les risques identifiés. / Programmable and site-specific nucleases such as CRISPR-Cas9 have started a genome editing revolution, holding hopes to transform human health. Multiplexing, or the ability to simultaneously introduce many distinct modifications in the genome, will be required for basic and applied research. It will help to probe the physio-pathological functions of complex genetic circuits and to develop improved cell therapies or anti-viral treatments. By pushing the boundaries of genome engineering, we may reach a point where writing whole mammalian genomes will be possible. Such a feat may lead to the generation of virus-, cancer- or aging-free cell lines, universal donor cell therapies or may even open the way to de-extinction. In this doctoral research project, I outline the current state-of-the-art of multiplexed genome editing, the current limits and where such technologies could be headed in the future. We leveraged this knowledge as well as the abundant transposable elements present in our DNA to build an optimization pipeline and develop a new set of tools that enable large-scale genome editing. We achieved a level of genome modification up to three orders of magnitude greater than previously recorded, therefore paving the way to mammalian genome writing. In addition, through the observation of the cytotoxicity generated by multiple double-strand breaks within the genome, we developed a bio-safety switch that could potentially prevent the adverse effects of current and future cell therapies. Finally, I lay out the potential concerns and threats that such an advance in genome editing technology may bring and point out possible solutions to mitigate the risks.
|
944 |
Sputtering of High Quality Layered MoS2 Films
Abid Al Shaybany, Sari January 2020 (has links)
We have deposited bulk, monolayer and few-layer, as well as large-scale, 2D layered MoS2 thin films by pulsed DC magnetron sputtering from an MoS2 target. MoS2 has gained great attention lately, together with other layered Transition Metal Dichalcogenides (TMDCs), for its unique optical and electrical properties with thickness-dependent bandgap. MoS2 also transitions from an indirect to a direct bandgap when thinned down to a monolayer, which makes it attractive for the fabrication of novel solar cells and photodetectors. Sputter-deposition has the advantage of producing large-scale, high-quality films, which is paramount for layered MoS2 to be applicable on an industrial level. The quality in terms of crystallinity and c⊥-texture of sputtered bulk MoS2 was evaluated as a function of several deposition process parameters: process pressure, substrate temperature and H2S-to-Ar ratio. X-ray Diffraction (XRD) results revealed that a high substrate temperature of 700 °C together with reactive H2S process gas improved the quality regardless of pressure. However, the quality improved slightly further with increasing pressure up to 50 mTorr. We also found that the quality improved with increasing temperature up to 700 °C using pure Ar as the process gas. Rutherford Backscattering Spectrometry (RBS) analysis showed that with the addition of H2S the stoichiometry of MoSx improved from MoS1.78 using pure Ar to fully stoichiometric MoS2.01 at 40% H2S in the H2S/Ar mixture. Cross-sectional Transmission Electron Microscopy (TEM) imaging revealed the high-quality 2D layered structure of the MoS2 films and a maximum thickness of 5 nm of c⊥-growth MoS2 before the onset of the undesirable c∥-growth. These results address the ongoing challenge of obtaining high quality and good stoichiometry of sputtered TMDC films at elevated temperatures. Formation of monolayer and few-layer MoS2 was confirmed by Raman and Photoluminescence (PL) spectroscopy.
The peak separation of the E12g and A1g Raman-active modes for monolayer MoS2 was measured to be 19.3 cm⁻¹ on SiO2/Si; it increases substantially in the transition to bilayer MoS2 and exhibits bulk values from four layers and above. This result serves as a good indicator of monolayer as well as few-layer MoS2 formation. The monolayer film exhibits a strong photoluminescence peak at 1.88 eV owing to its direct optical bandgap, as compared to the indirect one of bilayer and thicker films. X-ray Photoelectron Spectroscopy (XPS) spectra of the monolayer MoSx film indicate successful sulfurization of the molybdenum atoms and absence of residual sulfur. XPS also showed ideal stoichiometric MoS2.03 ± 0.03 of the monolayer film. Furthermore, a uniform MoS2 monolayer was successfully grown on a 4" SiO2/Si wafer, demonstrating the large-scale uniformity that can be achieved by sputter-deposition, making it highly applicable on an industrial level.
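As a back-of-the-envelope illustration of how this peak separation serves as a thickness indicator, the sketch below maps a measured E12g–A1g separation to a layer-count estimate. The threshold values are assumptions chosen around the reported monolayer value of 19.3 cm⁻¹ and the bulk-like behaviour from four layers on; they are not calibrated numbers:

```python
# Illustrative layer-count indicator based on the trend reported above:
# the E12g-A1g separation grows from ~19.3 cm^-1 (monolayer on SiO2/Si)
# towards a bulk value reached at about four layers.  The thresholds
# below are assumptions for this sketch, not calibrated values.
def estimate_layers(separation_cm1: float) -> str:
    if separation_cm1 < 20.0:
        return "monolayer"
    if separation_cm1 < 23.0:
        return "bilayer/trilayer"
    return "bulk-like (>= 4 layers)"

print(estimate_layers(19.3))  # monolayer
```

In practice such thresholds would be calibrated per substrate and instrument, since the separation also shifts with strain and doping.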
|
945 |
Dynamika otopných ploch / Dynamics of heating surface behavior
Oravec, Jakub January 2019 (has links)
The diploma thesis focuses on the dynamics of selected heating surfaces. The aim of the thesis is to determine the dynamics of heating-up and cooling-down and the effect of these characteristics on the energy consumption of the building. The project part deals with the design of a heating solution for a residential building in three variants. An energy simulation of the designed variants compares the consumption of thermal energy over one year. A further simulation investigates the dynamics of selected large-scale heating surfaces. For each construction, non-stationary models of heating-up and cooling-down were created and compared in terms of thermal inertia.
|
946 |
Multiphysics and Large-Scale Modeling and Simulation Methods for Advanced Integrated Circuit Design
Shuzhan Sun (11564611) 22 November 2021 (has links)
<div>The design of advanced integrated circuits (ICs) and systems calls for multiphysics and large-scale modeling and simulation methods. On the one hand, novel devices and materials are emerging in next-generation IC technology, which requires multiphysics modeling and simulation. On the other hand, the ever-increasing complexity of ICs requires more efficient numerical solvers.</div><div><br></div><div>In this work, we propose a multiphysics modeling and simulation algorithm to co-simulate Maxwell's equations, the dispersion relation of materials, and the Boltzmann equation to characterize emerging new devices in IC technology such as Cu-Graphene (Cu-G) hybrid nano-interconnects. We also develop an unconditionally stable time marching scheme to remove the dependence of the time step on the space step for an efficient simulation of the multiscaled and multiphysics system. Extensive numerical experiments and comparisons with measurements have validated the accuracy and efficiency of the proposed algorithm. Compared to analysis based on simplified steady-state models, a significant difference is observed when the frequency is high or/and the dimension of the Cu-G structure is small, which necessitates our proposed multiphysics modeling and simulation for the design of advanced Cu-G interconnects. </div><div><br></div><div>To address the large-scale simulation challenge, we develop a new split-field domain-decomposition algorithm amenable to parallelization for solving Maxwell's equations, which minimizes the communication between subdomains while having a fast convergence of the global solution. Meanwhile, the algorithm is unconditionally stable in the time domain. In this algorithm, unlike prevailing domain decomposition methods that treat the interface unknown as a whole and let it be shared across subdomains, we partition the interface unknown into multiple components, and solve each of them from one subdomain. 
In this way, we transform the original coupled system into fully decoupled subsystems to solve. Only one addition (communication) of the interface unknown needs to be performed after the computation in each subdomain is finished at each time step. More importantly, the algorithm has a fast convergence and permits the use of a large time step irrespective of space step. Numerical experiments on large-scale on-chip and package layout analysis have demonstrated the capability of the new domain decomposition algorithm. </div><div><br></div><div>To tackle the challenge of efficient simulation of irregular structures, in the last part of the thesis, we develop a method for the stability analysis of unsymmetrical numerical systems in the time domain. An unsymmetrical system is traditionally avoided in numerical formulation since a traditional explicit simulation is absolutely unstable, and how to control the stability is unknown. However, an unsymmetrical system is frequently encountered in the modeling and simulation of unstructured meshes and nonreciprocal electromagnetic and circuit devices. In our method, we reduce the stability analysis of a large system to the analysis of disassembled single elements, thereby providing a feasible way to control the stability of large-scale systems regardless of whether the system is symmetrical or unsymmetrical. We then apply the proposed method to prove and control the stability of an unsymmetrical matrix-free method that solves Maxwell's equations in general unstructured meshes while not requiring a matrix solution.<br></div><div><br></div>
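The "partition the interface unknown, then one addition per step" idea can be mimicked in a toy setting. The sketch below (a 1-D explicit heat equation, purely illustrative and far simpler than the split-field scheme for Maxwell's equations) lets each subdomain compute its own component of the interface value from local data only, so a single addition per time step is the only communication, and the result matches the undecomposed solve to rounding error:

```python
import numpy as np

# Toy analogue of one-addition-per-step domain decomposition
# (1-D explicit heat equation; NOT the split-field Maxwell scheme of
# the thesis).  The interface unknown u[m] is split into two
# components, each computed entirely inside one subdomain, and a single
# addition per time step reassembles it.
n, nu, steps = 21, 0.25, 50
u = np.sin(np.pi * np.linspace(0.0, 1.0, n))   # initial condition
m = n // 2                                      # interface index

ref = u.copy()                                  # single-domain reference
for _ in range(steps):
    ref[1:-1] += nu * (ref[:-2] - 2 * ref[1:-1] + ref[2:])

for _ in range(steps):
    new = u.copy()
    # Each subdomain updates its own interior nodes independently;
    # the shared value u[m] from the previous step is known to both.
    new[1:m] += nu * (u[0:m - 1] - 2 * u[1:m] + u[2:m + 1])
    new[m + 1:-1] += nu * (u[m:-2] - 2 * u[m + 1:-1] + u[m + 2:])
    # Interface components, each built from one subdomain's data only...
    left = u[m] / 2 + nu * (u[m - 1] - u[m])
    right = u[m] / 2 + nu * (u[m + 1] - u[m])
    # ...and one addition (the only communication) reassembles u[m].
    new[m] = left + right
    u = new

print(np.max(np.abs(u - ref)))  # at rounding level (< 1e-12)
```

Because left + right equals the full three-point update of u[m] algebraically, the decomposed and global solves agree; only the ordering of the floating-point operations differs.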
|
947 |
Commande dynamique de robots déformables basée sur un modèle numérique / Model-based dynamic control of soft robots
Thieffry, Maxime 16 October 2019 (has links)
Cette thèse s’intéresse à la modélisation et à la commande de robots déformables, c’est à dire de robots dont le mouvement se fait par déformation. Nous nous intéressons à la conception de lois de contrôle en boucle fermée répondant aux besoins spécifiques du contrôle dynamique de robots déformables, sans restrictions fortes sur leur géométrie. La résolution de ce défi soulève des questions théoriques qui nous amènent au deuxième objectif de cette thèse: développer de nouvelles stratégies pour étudier les systèmes de grandes dimensions. Ce manuscrit couvre l’ensemble du développement des lois de commandes, de l’étape de modélisation à la validation expérimentale. Outre les études théoriques, différentes plateformes expérimentales sont utilisées pour valider les résultats. Des robots déformables actionnés par câble et par pression sont utilisés pour tester les algorithmes de contrôle. A travers ces différentes plateformes, nous montrons que la méthode peut gérer différents types d’actionnement, différentes géométries et propriétés mécaniques. Cela souligne l’un des intérêts de la méthode, sa généricité. D’un point de vue théorique, les systèmes dynamiques à grande dimensions ainsi que les algorithmes de réduction de modèle sont étudiés. En effet, modéliser des structures déformables implique de résoudre des équations issues de la mécanique des milieux continus, qui sont résolues à l’aide de la méthode des éléments finis (FEM). Ceci fournit un modèle précis des robots mais nécessite de discrétiser la structure en un maillage composé de milliers d’éléments, donnant lieu à des systèmes dynamiques de grandes dimensions. Cela conduit à travailler avec des modèles de grandes dimensions, qui ne conviennent pas à la conception d’algorithmes de contrôle. Une première partie est consacrée à l’étude du modèle dynamique à grande dimension et de son contrôle, sans recourir à la réduction de modèle. 
Nous présentons un moyen de contrôler le système à grande dimension en utilisant la connaissance d'une fonction de Lyapunov en boucle ouverte. Ensuite, nous présentons des algorithmes de réduction de modèle afin de concevoir des contrôleurs de dimension réduite et des observateurs capables de piloter ces robots déformables. Les lois de contrôle validées sont basées sur des modèles linéaires ; il s'agit d'une limitation connue de ce travail car elle contraint l'espace de travail du robot. Ce manuscrit se termine par une discussion qui offre un moyen d'étendre les résultats aux modèles non linéaires. L'idée est de linéariser le modèle non linéaire à grande échelle autour de plusieurs points de fonctionnement et d'interpoler ces points pour couvrir un espace de travail plus large. / This thesis focuses on the design of closed-loop control laws for the specific needs of dynamic control of soft robots, without being too restrictive regarding the robots' geometry. It covers the entire development of the controller, from the modeling step to the practical experimental validation. In addition to the theoretical studies, different experimental setups are used to illustrate the results. A cable-driven soft robot and a pressurized soft arm are used to test the control algorithms. Through these different setups, we show that the method can handle different types of actuation, different geometries and mechanical properties. This emphasizes one of the interests of the method, its genericity. From a theoretical point of view, large-scale dynamical systems along with model reduction algorithms are studied. Indeed, modeling soft structures implies solving equations coming from continuum mechanics using the Finite Element Method (FEM). This provides an accurate model of the robots but it requires discretizing the structure into a mesh composed of thousands of elements, yielding large-scale dynamical systems. 
This leads to models of large dimension, which are not suitable for the design of control algorithms. A first part is dedicated to the study of the large-scale dynamic model and its control, without using model reduction. We present a way to control the large-scale system using the knowledge of an open-loop Lyapunov function. Then, this work investigates model reduction algorithms to design low-order controllers and observers to drive soft robots. The validated control laws are based on linear models. This is a known limitation of this work as it constrains the guaranteed domain of the controller. This manuscript ends with a discussion that offers a way to extend the results towards nonlinear models. The idea is to linearize the large-scale nonlinear model around several operating points and interpolate between these points to cover a wider workspace.
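The linearize-and-interpolate idea sketched in that closing discussion can be illustrated on a tiny system. The example below (an assumption-laden toy with a damped pendulum, not the thesis's FEM-based soft-robot model) builds local Jacobians at a few operating angles and linearly interpolates the system matrix between the two nearest ones:

```python
import numpy as np

# Sketch of linearising a nonlinear model at several operating points
# and interpolating the local A matrices in between (toy damped
# pendulum; illustrative stand-in for the large-scale FEM model).
g_over_l, damping = 9.81, 0.5

def a_matrix(theta0: float) -> np.ndarray:
    """Jacobian of a damped pendulum at equilibrium angle theta0."""
    return np.array([[0.0, 1.0],
                     [-g_over_l * np.cos(theta0), -damping]])

operating_points = np.array([0.0, 0.5, 1.0, 1.5])   # rad
local_models = [a_matrix(t) for t in operating_points]

def interpolated_a(theta: float) -> np.ndarray:
    """Linear interpolation between the two nearest local models."""
    i = int(np.clip(np.searchsorted(operating_points, theta) - 1,
                    0, len(operating_points) - 2))
    w = (theta - operating_points[i]) / (
        operating_points[i + 1] - operating_points[i])
    return (1 - w) * local_models[i] + w * local_models[i + 1]

# Between 0.5 and 1.0 rad the model blends the two neighbouring
# linearisations; at an operating point it reduces to the exact Jacobian.
print(interpolated_a(0.75))
```

At each operating point the interpolated model coincides with the exact linearization, and in between it varies continuously, which is the basic ingredient of a gain-scheduled extension to a wider workspace.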
|
948 |
Passage à l'échelle pour la visualisation interactive exploratoire de données : approches par abstraction et par déformation spatiale / Addressing scaling challenges in interactive exploratory visualization with abstraction and spatial distortion
Richer, Gaëlle 26 November 2019 (has links)
La visualisation interactive est un outil essentiel pour l'exploration, la compréhension et l'analyse de données. L'exploration interactive efficace de jeux de données grands ou complexes présente cependant deux difficultés fondamentales. La première est visuelle et concerne les limitations de la perception et cognition humaine, ainsi que celles des écrans. La seconde est computationnelle et concerne les limitations de capacité mémoire ou de traitement des machines standards. Dans cette thèse, nous nous intéressons aux techniques de passage à l'échelle relativement à ces deux difficultés, pour plusieurs contextes d'application. Pour le passage à l'échelle visuelle, nous présentons une approche versatile de mise en évidence de sous-ensembles d'éléments par déformation spatiale appliquée aux vues multiples et une représentation abstraite et multi-échelle de coordonnées parallèles. Sur les vues multiples, la déformation spatiale vise à remédier à la diminution de l'efficacité de la surbrillance lorsque les éléments graphiques sont de taille réduite. Sur les coordonnées parallèles, l'abstraction multi-échelle consiste à simplifier la représentation tout en permettant d'accéder interactivement au détail des données, en les pré-agrégeant à plusieurs niveaux de détail. Pour le passage à l'échelle computationnelle, nous étudions des approches de pré-calcul et de calcul à la volée sur des infrastructures distribuées permettant l'exploration de jeux de données de plus d'un milliard d'éléments en temps interactif. Nous présentons un système pour l'exploration de données multi-dimensionnelles dont les interactions et l'abstraction respectent un budget en nombre d'éléments graphiques qui, en retour, fournit une borne théorique sur les latences d'interactions dues au transfert réseau entre client et serveur. Avec le même objectif, nous comparons des stratégies de réduction de données géométrique pour la reconstruction de cartes de densité d'ensembles de points. 
/ Interactive visualization is helpful for exploring, understanding, and analyzing data. However, increasingly large and complex data challenges the efficiency of visualization systems, both visually and computationally. The visual challenge stems from human perceptual and cognitive limitations as well as screen space limitations, while the computational challenge stems from the processing and memory limitations of standard computers. In this thesis, we present techniques addressing the two scalability issues for several interactive visualization applications. To address visual scalability requirements, we present a versatile spatial-distortion approach for linked emphasis on multiple views and an abstract and multi-scale representation based on parallel coordinates. Spatial distortion aims at alleviating the weakened emphasis effect of highlighting when applied to small-sized visual elements. Multiscale abstraction simplifies the representation while providing detail on demand by pre-aggregating data at several levels of detail. To address computational scalability requirements and scale data processing to billions of items in interactive times, we use pre-computation and real-time computation on a remote distributed infrastructure. We present a system for multi-dimensional data exploration in which the interactions and abstract representation comply with a visual item budget and in return provide a guarantee on network-related interaction latencies. With the same goal, we compare several geometric reduction strategies for the reconstruction of density maps of large-scale point sets.
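The density-map reduction idea can be sketched in a few lines. The example below is a generic illustration of geometric data reduction (not one of the specific strategies compared in the thesis): instead of shipping raw points to the client, the server aggregates them into a fixed-size grid of counts whose size depends only on the target resolution, which bounds both transfer size and render cost:

```python
import numpy as np

# Generic density-map reduction sketch: aggregate a large point set
# into a fixed-resolution grid of counts (an illustration, not one of
# the specific reduction strategies compared in the thesis).
rng = np.random.default_rng(1)
points = rng.normal(size=(1_000_000, 2))          # raw point set

bins = 256                                         # "screen" resolution
density, xedges, yedges = np.histogram2d(
    points[:, 0], points[:, 1], bins=bins, range=[[-4, 4], [-4, 4]])

# The reduced representation is bins*bins cells no matter how many
# points went in; points outside the range are simply not counted.
print(density.shape, int(density.sum()))  # (256, 256) and <= 1_000_000
```

Reconstruction quality then depends on the bin resolution relative to the display, which is exactly the kind of trade-off a comparison of reduction strategies has to quantify.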
|
949 |
On the Parameter Selection Problem in the Newton-ADI Iteration for Large Scale Riccati Equations
Benner, Peter, Mena, Hermann, Saak, Jens 26 November 2007 (has links)
The numerical treatment of linear-quadratic regulator problems for parabolic partial differential equations (PDEs) on infinite time horizons requires the solution of large scale algebraic Riccati equations (AREs). The Newton-ADI iteration is an efficient numerical method for this task. It includes the solution of a Lyapunov equation by the alternating directions implicit (ADI) algorithm in each iteration step. On finite time intervals the solution of a large scale differential Riccati equation is required. This can be solved by a backward differentiation formula (BDF) method, which needs to solve an ARE in each time step.
Here, we study the selection of shift parameters for the ADI method. This leads to a rational min-max problem which has been considered by many authors. Since knowledge of the complete complex spectrum is crucial for computing the optimal solution, this is infeasible for the large scale systems arising from finite element discretization of PDEs. Therefore several alternatives for computing suboptimal parameters are discussed and compared for numerical examples.
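The rational min-max objective can be made concrete with a small numerical sketch. The example below uses a common logarithmic-spacing heuristic for the shifts (an illustration of the objective, not one of the parameter strategies compared in the paper) and evaluates the resulting spectral factor for a real spectrum in an assumed interval [-b, -a]:

```python
import numpy as np

# Illustrative look at the rational min-max problem behind ADI shift
# selection.  For a real spectrum contained in [-b, -a], the ADI error
# is governed by
#     rho = max_lambda  prod_j |(p_j - lambda) / (p_j + lambda)|,
# and good shifts p_j make rho small.  The logarithmic spacing below is
# a common heuristic, not a strategy from the paper.
a, b, n_shifts = 1.0, 1e4, 6

# Heuristic: logarithmically spaced negative shifts across [-b, -a].
shifts = -np.logspace(np.log10(a), np.log10(b), n_shifts)

# Sample the (assumed real, negative) spectrum densely.
lam = -np.logspace(np.log10(a), np.log10(b), 2000)

factors = (shifts[:, None] - lam[None, :]) / (shifts[:, None] + lam[None, :])
rho = np.abs(factors.prod(axis=0)).max()

print(rho)  # strictly below 1: the ADI error contracts on every sweep
```

Each factor has magnitude below 1 whenever shift and eigenvalue share the same sign, so rho < 1 is guaranteed here; the optimal (Wachspress-type) parameters minimize rho, which is exactly what requires spectral knowledge that is unavailable for large-scale FEM systems.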
|
950 |
Index Modulation Techniques for Energy-efficient Transmission in Large-scale MIMO Systems
Sefunc, Merve 16 March 2020 (has links)
This thesis exploits index modulation techniques to design energy- and spectrum-efficient system models to operate in future wireless networks. In this respect, index modulation techniques are studied considering two different media: mapping the information onto the frequency indices of multicarrier systems, and onto the antenna array indices of a platform that comprises multiple antennas.
The index modulation techniques in wideband communication scenarios, considering orthogonal and generalized frequency division multiplexing systems, are studied first. Single-cell multiuser networks are considered while developing the system models that exploit index modulation on the subcarriers of the multicarrier systems. Instead of actively modulating all the subcarriers, a subset is selected according to the index modulation bits. As a result, some subcarriers remain idle during the data transmission phase, and the activation pattern of the subcarriers conveys additional information.
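The subcarrier activation bookkeeping can be sketched directly. The example below is generic OFDM-IM index mapping (illustrative parameters, not the exact transceiver of the thesis): in a group of n subcarriers with k active, floor(log2(C(n, k))) index bits select the activation pattern, and the idle subcarriers carry no symbol while the pattern itself conveys information:

```python
from itertools import combinations
from math import comb, floor, log2

# Generic OFDM-IM subcarrier activation mapping (illustrative values,
# not the thesis's transceiver): index bits pick which k of the n
# subcarriers in a group are active.
n, k = 4, 2
index_bits = floor(log2(comb(n, k)))            # 2 bits, since C(4,2)=6
patterns = list(combinations(range(n), k))[:2 ** index_bits]

def activate(bits: str) -> tuple:
    """Return the indices of the active subcarriers for the index bits."""
    return patterns[int(bits, 2)]

print(index_bits, activate("10"))  # prints: 2 (0, 3)
```

Only 2^floor(log2(C(n, k))) of the C(n, k) legal patterns are used, so the mapping stays a whole number of bits; the remaining patterns are simply never transmitted.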
The transceivers for the orthogonal and generalized frequency division multiplexing systems with index modulation are designed for both the uplink and downlink transmission phases with a linear combiner and precoder in order to reduce system complexity. In the developed system models, channel state information is required only at the base station. The linear combiner is designed using the minimum mean square error (MMSE) criterion to mitigate inter-user interference. The proposed system models offer a flexible design, as the parameters are independent of each other and can be adjusted in favor of energy efficiency, spectrum efficiency, peak-to-average power ratio, or error performance.
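The MMSE combiner mentioned above can be sketched as follows for an uplink model y = Hx + n with unit-power user streams; the channel matrix and noise variance here are placeholders, not the thesis's actual system dimensions:

```python
import numpy as np

def mmse_combiner(H, noise_var):
    """Linear MMSE combiner for y = H x + n with E[x x^H] = I and per-antenna
    noise variance noise_var: W = (H^H H + sigma^2 I)^{-1} H^H.
    Applying W to y suppresses inter-user interference while limiting
    noise enhancement."""
    K = H.shape[1]                               # number of user streams
    G = H.conj().T @ H + noise_var * np.eye(K)
    return np.linalg.solve(G, H.conj().T)
```

In the low-noise regime the combiner approaches the zero-forcing solution (W H → I), while for larger noise variance the regularizing term trades residual interference for reduced noise amplification.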
Next, index modulation techniques are studied for large-scale multiple-input multiple-output systems operating in millimeter wave bands. To overcome the drawbacks of transmission at millimeter wave frequencies, the channel properties have to be taken into account when designing the wireless network. Large-scale multiple-input multiple-output systems increase the degrees of freedom in the spatial domain, which can be exploited to focus the transmit power directly onto the intended receiver terminal and thus cope with the severe path loss. However, scaling up the number of hardware elements results in excessive power consumption. Hybrid architectures provide a remedy by shifting part of the signal processing to the analog domain, so that the number of bulky, power-hungry hardware elements can be reduced. The price is a performance degradation caused by renouncing fully digital signal processing. Index modulation techniques can be combined with the hybrid architecture to compensate for the loss in spectrum efficiency and further increase the data rates.
A user terminal architecture is designed that employs analog beamforming together with spatial modulation, where part of the information bits is mapped onto the indices of the antenna arrays. The architecture comprises a switching stage, which assigns the user terminal antennas to the phase shifter groups so as to minimize spatial correlation, and a phase shifting stage, which maximizes the beamforming gain to combat the path loss. A computationally efficient optimization algorithm is developed to configure the system. The flexibility of the architecture enables optimization of the hybrid transceiver at any signal-to-noise ratio.
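A toy version of this bit mapping is sketched below, assuming a uniform linear array with half-wavelength spacing and hypothetical steering angles; the thesis's optimization of the switching and phase-shifting stages is not reproduced here:

```python
import numpy as np

def sm_transmit(bits, steering_angles, N=8, const=(1+1j, 1-1j, -1+1j, -1-1j)):
    """Spatial modulation with analog beamforming (illustrative sketch):
    log2(#arrays) bits select the active antenna array, the next 2 bits
    select a normalized QPSK symbol, and the array applies a constant-
    modulus phase-shifter beam steered toward the chosen angle."""
    p_idx = int(np.log2(len(steering_angles)))
    array = int(bits[:p_idx], 2)                 # spatial-modulation bits
    symbol = const[int(bits[p_idx:p_idx + 2], 2)] / np.sqrt(2)
    theta = steering_angles[array]
    # ULA steering vector, half-wavelength spacing: unit-modulus entries,
    # hence realizable with phase shifters only.
    w = np.exp(1j * np.pi * np.arange(N) * np.sin(theta)) / np.sqrt(N)
    return array, w * symbol
```

Because only one array radiates per channel use, the receiver can detect the array index from the channel signature it observes, recovering the spatial-modulation bits without those bits ever consuming transmit power.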
A base station is designed that employs hybrid beamforming together with spatial modulation. The analog beamformer points the transmit beam only in the direction of the intended user terminal, mitigating leakage of transmit power into other directions, and the beamformer used for transmission is chosen based on the spatial modulation bits. The digital precoder eliminates inter-user interference using the zero-forcing method. The base station computes the hybrid beamformers and the digital combiners, and feeds back only the digital combiner of each antenna array-user pair to the corresponding user terminal; thus, a low-complexity user architecture is sufficient to achieve high performance. The developed optimization framework for energy efficiency jointly optimizes the number of served users and the total transmit power by utilizing a derived upper bound on the achievable rate. The proposed transceiver architectures yield a more energy-efficient system model than hybrid systems that do not exploit spatial modulation.
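The zero-forcing step can be sketched as follows for an effective users-by-antennas channel H; the total-power normalization shown is one common choice, not necessarily the thesis's:

```python
import numpy as np

def zf_precoder(H, P_tx=1.0):
    """Zero-forcing digital precoder for the effective downlink channel H
    (users x antennas): F = H^H (H H^H)^{-1}, scaled to a total transmit
    power budget P_tx. H F is then diagonal, i.e. inter-user interference
    is eliminated (illustrative sketch)."""
    F = H.conj().T @ np.linalg.inv(H @ H.conj().T)
    scale = np.sqrt(P_tx / np.trace(F @ F.conj().T).real)
    return scale * F
```

The pseudo-inverse structure requires at least as many transmit antennas as served users and full row rank of H, which is why the number of served users appears as an optimization variable in the energy-efficiency framework above.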
This thesis develops low-complexity system models that operate in narrowband and wideband channel environments to meet the energy and spectrum efficiency demands of future wireless networks. It is corroborated in the thesis that adopting index modulation techniques in both classes of systems improves the system performance in various respects.

1 Introduction 1
1.1 Motivation 1
1.2 Overview and Contribution 2
1.3 Outline 9
2 Preliminaries and Fundamentals 13
2.1 Multicarrier Systems 13
2.2 Large-scale Multiple Input Multiple Output Systems 17
2.3 Index Modulation Techniques 19
2.4 Single Cell Multiuser Networks 22
3 Multicarrier Systems with Index Modulation 27
3.1 Orthogonal Frequency Division Multiplexing 28
3.2 Generalized Frequency Division Multiplexing 40
3.3 Summary 52
4 Hybrid Beamforming with Spatial Modulation 55
4.1 Uplink Transmission 56
4.2 Downlink Transmission 74
4.3 Summary 106
5 Conclusion and Outlook 109
5.1 Conclusion 109
5.2 Outlook 111
A Quantization Error Derivations 113
B On the Achievable Rate of Gaussian Mixtures 115
B.1 The Conditional Density Function 115
B.2 Tight Bounds on the Differential Entropy 116
B.3 A Bound on the Achievable Rate 118
C Multiuser MIMO Downlink without Spatial Modulation 121
Bibliography
|