11 |
Sistemas Clustered-OFDM SISO e MIMO para power line communication. Colen, Guilherme Ribeiro. 06 September 2012
Funding: CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico
This thesis aims at investigating, proposing, and analyzing techniques to reduce the computational complexity of algorithms implemented in the physical layer of power line communication (PLC) transceivers based on orthogonal frequency division multiplexing (OFDM). First, clustered-OFDM is investigated and analyzed as a means of reducing computational complexity. A trade-off between computational complexity and performance is demonstrated for clustered-OFDM and orthogonal frequency division multiple access (OFDMA), and performance results quantify the trade-off between computational complexity and channel-capacity reduction that clustered-OFDM achieves in comparison with OFDMA. Second, a clustered-OFDM scheme for 2×2 multiple-input multiple-output (MIMO) communication based on a space-time block code, named MIMO-clustered-OFDM, is proposed and analyzed. Comparison results reveal that the proposed MIMO-clustered-OFDM can trade channel capacity for computational complexity and can achieve lower computational complexity than MIMO-OFDMA. Third, a procedure is introduced to statistically analyze the degradation caused by grouping sets of contiguous subcarriers for use by bit-loading algorithms. A case study with PLC channels reveals that the criterion applied to group contiguous subcarriers can yield different levels of throughput reduction, as well as other performance losses, as the group size is varied.
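The throughput loss from grouping contiguous subcarriers, analyzed in the third contribution of this thesis, can be sketched numerically. The snippet below is a toy illustration under assumptions not taken from the thesis: a linear SNR ramp across subcarriers, a 9.8 dB SNR gap, and a rule that every subcarrier in a group carries the bit load of the group's worst subcarrier.

```python
import math

def bits_per_subcarrier(snr_db, gamma_db=9.8):
    # Gap-approximation bit loading: b = floor(log2(1 + SNR/Gamma)).
    # The 9.8 dB gap is an assumed example value, not from the thesis.
    snr = 10 ** (snr_db / 10)
    gap = 10 ** (gamma_db / 10)
    return max(0, math.floor(math.log2(1 + snr / gap)))

def throughput(snr_db_list, group_size=1):
    # Group contiguous subcarriers; every subcarrier in a group is
    # constrained to the bit load of the group's worst (minimum) SNR.
    total = 0
    for i in range(0, len(snr_db_list), group_size):
        group = snr_db_list[i:i + group_size]
        total += bits_per_subcarrier(min(group)) * len(group)
    return total

# Toy frequency-selective channel: SNR ramps from 5 to 35 dB over 16 subcarriers.
snrs = [5 + 2 * k for k in range(16)]
per_subcarrier = throughput(snrs, group_size=1)
grouped = throughput(snrs, group_size=4)
print(per_subcarrier, grouped)  # grouping never increases throughput
```

Larger groups reduce the per-subcarrier bookkeeping (the complexity motivation above) but lose bits wherever the SNR varies within a group, which is the trade-off the proposed procedure quantifies statistically.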
|
12 |
Dense Core Formation Simulations in Turbulent Molecular Clouds with Large Scale Anisotropy. Petitclerc, Nicolas. 03 1900
In this thesis, we study star formation in clustered environments within molecular clouds using Smoothed Particle Hydrodynamics (SPH) simulations. Our first approach was to use "sink particles" to replace the dense gas particles where stars are forming. We implemented this type of particle in GASOLINE and ran a simulation with a set of parameters similar to Bate et al. (2003). We found good general agreement with that study. However, this work raised increasing concerns about some of the approximations used to follow the fragmentation process over many orders of magnitude in density. Our first issue was with the polytropic equation of state used to simulate high-density gas, which we believe would require some form of radiative transfer to be reliable. We also had concerns about the sink particles themselves, which potentially overestimate the accretion rates.

This guided our following work, where we chose to avoid both sinks and polytropic assumptions, allowing us to concentrate on the role of turbulence in forming prestellar cores. Supersonic turbulence is known to decay rapidly even when magnetic fields and gravity are considered. However, those studies are based on grid codes for periodic boxes. Our simulations are not periodic; they have open boundaries, so gravitational collapse can occur for the whole molecular cloud, not only for small portions of it. Hence the picture we observe in our self-gravitating turbulent molecular clouds is different. We found that under gravitational collapse turbulence is naturally developed and maintained, with properties in good agreement with the current observational and theoretical picture.

We also compared the cores we formed with observations. We looked at several observable properties of cores: density profiles, velocity dispersion and rotation of the cores, core-core velocity dispersion, core-envelope velocity dispersion, the velocity dispersion vs. core size relation, and the core mass function. We found good general agreement between our simulated and observed cores, which indicates that extra physics like magnetic fields, outflows, a proper equation of state or radiative transfer would have only secondary effects at this formation stage, or would tend to cancel each other.

Thesis: Doctor of Philosophy (PhD)
|
13 |
A Variance Estimator for Cohen’s Kappa under a Clustered Sampling Design. Abdel-Rasoul, Mahmoud Hisham. 09 September 2011
No description available.
|
14 |
Performance comparison between Clustered and Cascaded Clustered Shading. Levin, Adam; Bresche, Joakim. January 2022
Background. The game industry is rapidly demanding more and more computing power in its strive for more realistic renditions of environments, simulations and graphics. To further accelerate improvements to the realism of real-time graphics, optimizations like Clustered and Cascaded Clustered Shading come into play. The purpose of these techniques is to reduce the time it takes to render a frame by dividing the view frustum into smaller segments called clusters that can then be used for light calculations. Cascaded Clustered Shading is a slightly more customizable method which aims to improve on Clustered Shading by allowing more control over how the view frustum is divided into clusters.

Objectives. The goal of our thesis is to explore the effectiveness of Cascaded Clustered Shading compared to Clustered Shading in scenes with 64, 256, 1024 and 4096 lights respectively, and to find which type of subdivision pattern performs best in which situation, thus proving or disproving the theory that more uniform cluster sizes are beneficial in reducing the complexity of light calculations compared to the increasing cluster sizes present in Clustered Shading.

Methods. To answer these questions we implemented the techniques in a test scene where we could easily compare the performance of the different subdivision patterns and techniques with 64, 256, 1024 and 4096 lights respectively. Three different patterns were tested: one with an increasing number of subdivisions per layer, P1 (an increase in the number of clusters per layer); one with a static number of subdivisions per layer, P2, representing the performance of Clustered Shading; and one with a decreasing number of subdivisions per layer, P3. Additional performance metrics were recorded, measuring the time taken by the different parts of the technique, so that not just the overall performance could be compared. The method used was thus a quantitative research method of implementation and experimentation.

Results. The results support the theory that more uniform cluster sizes tend to be beneficial when rendering a scene with many lights, showing a clear trend favoring the pattern that creates more uniform clusters, P1. However, the results also show a contradicting overall performance increase (comparing FPS) using the reversed pattern with sharply increasing cluster sizes based on the distance from the camera, P3. The overall performance of patterns P1 and P3 was better than that of P2.

Conclusions. The conclusions drawn from the results are that Cascaded Clustered Shading performs better than Clustered Shading in most cases depending on the pattern, and that more uniform cluster sizes are beneficial when rendering many lights in most cases.
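The cluster lookup underlying both techniques can be sketched as mapping a fragment (screen position plus view-space depth) to a cluster id. The tile size, grid dimensions, and logarithmic depth slicing below are common choices in clustered-shading implementations, not details taken from this thesis; a cascaded variant would vary the subdivision counts per depth layer.

```python
import math

def depth_slice(z, near, far, num_slices):
    # Logarithmic depth slicing commonly used in clustered shading:
    # slice = floor(num_slices * log(z/near) / log(far/near)).
    s = int(num_slices * math.log(z / near) / math.log(far / near))
    return min(s, num_slices - 1)

def cluster_index(px, py, z, tile=64, grid_x=20, grid_y=12,
                  near=0.1, far=1000.0, slices=16):
    # Map a fragment to a flat cluster id: screen tile in x/y,
    # logarithmic slice in depth. All sizes here are assumed examples.
    tx, ty = px // tile, py // tile
    s = depth_slice(z, near, far, slices)
    return tx + grid_x * (ty + grid_y * s)
```

Per-cluster light lists are then built once per frame, so each fragment only iterates the lights assigned to its cluster instead of every light in the scene.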
|
15 |
Instruction scheduling optimizations for energy efficient VLIW processors. Porpodas, Vasileios. January 2013
Very Long Instruction Word (VLIW) processors are wide-issue statically scheduled processors. Instruction scheduling for these processors is performed by the compiler and is therefore a critical factor for their operation. Some VLIWs are clustered, a design that improves scalability to higher issue widths while improving energy efficiency and frequency. Their design is based on physically partitioning the shared hardware resources (e.g., the register file). Such designs further increase the challenges of instruction scheduling, since the compiler has the additional tasks of deciding on the placement of instructions to the corresponding clusters and orchestrating the data movements across clusters. In this thesis we propose instruction scheduling optimizations for energy-efficient VLIW processors. Some of the techniques aim at improving the existing state-of-the-art scheduling techniques, while others aim at using compiler techniques to close the gap between lightweight hardware designs and more complex ones. Each of the proposed techniques targets individual features of energy-efficient VLIW architectures. Our first technique, called Aligned Scheduling, makes use of a novel scheduling heuristic for hiding memory latencies in lightweight VLIW processors without hardware load-use interlocks (Stall-On-Miss). With Aligned Scheduling, a software-only technique, a SOM processor coupled with non-blocking caches can better cope with cache latencies and perform closer to the heavyweight designs. Performance is improved by up to 20% across a range of benchmarks from the Mediabench II and SPEC CINT2000 benchmark suites. The rest of the techniques target a class of VLIW processors known as clustered VLIWs, which are more scalable, more energy efficient, and operate at higher frequencies than their monolithic counterparts.
The second scheme (LUCAS) is an improved scheduler for clustered VLIW processors that solves the problem of the existing state-of-the-art schedulers being very susceptible to the inter-cluster communication latency. The proposed unified clustering and scheduling technique is a hybrid scheme that performs instruction-by-instruction switching between the two state-of-the-art clustering heuristics, leading to better scheduling than either of them. It generates better performing code compared to the state-of-the-art for a wide range of inter-cluster latency values on the Mediabench II benchmarks. The third technique (called CAeSaR) is a scheduler for clustered VLIW architectures that minimizes inter-cluster communication by local caching and reuse of already received data. Unlike dynamically scheduled processors, where this can be supported by the register renaming hardware, in VLIWs it has to be done by the code generator. The proposed instruction scheduler unifies cluster assignment, instruction scheduling and communication minimization in a single unified algorithm, solving the phase ordering issues between all three parts. The proposed scheduler shows an improvement in execution time of up to 20.3% and 13.8% on average across a range of benchmarks from the Mediabench II and SPEC CINT2000 benchmark suites. The last technique applies to heterogeneous clustered VLIWs that support dynamic voltage and frequency scaling (DVFS) independently per cluster. In these processors there are no hardware interlocks between clusters to honor the data dependencies. Instead, the scheduler has to be aware of the DVFS decisions to guarantee correct execution. Effectively controlling DVFS, to selectively decrease the frequency of clusters with slack in their schedule, can lead to significant energy savings. The proposed technique (called UCIFF) solves the phase ordering problem between frequency selection and scheduling that is present in existing algorithms.
The results show that UCIFF produces better code than the state-of-the-art and very close to the optimal across the Mediabench II benchmarks. Overall, the proposed instruction scheduling techniques lead to either better efficiency on existing designs or allow simpler lightweight designs to be competitive against ones with more complex hardware.
|
16 |
Meta-Analytic Estimation Techniques for Non-Convergent Repeated-Measure Clustered Data. Wang, Aobo. 01 January 2016
Clustered data often feature nested structures and repeated measures. If coupled with binary outcomes and large samples (>10,000), this complexity can lead to non-convergence problems for the desired model, especially if random effects are used to account for the clustering. One way to bypass the convergence problem is to split the dataset into sub-samples small enough that the desired model converges, and then recombine results from those sub-samples through meta-analysis. We consider two ways to generate sub-samples: the K independent samples approach, where the data are split into K mutually exclusive sub-samples, and the cluster-based approach, where naturally existing clusters serve as sub-samples. Estimates or test statistics from either of these sub-sampling approaches can then be recombined using a univariate or multivariate meta-analytic approach. We also provide an innovative approach for simulating clustered and dependent binary data by simulating parameter templates that yield the desired cluster behavior. This approach is used to conduct simulation studies comparing the performance of the K independent samples and cluster-based approaches to generating sub-samples, the results from which are combined with either univariate or multivariate meta-analytic techniques. These studies show that using natural clusters led to less biased test statistics when the number of clusters and the treatment effect were large, as compared to the K independent samples approach, for both the univariate and multivariate meta-analytic approaches. The independent samples approach was preferred when the number of clusters and the treatment effect were small. We also apply these methods to data on cancer screening behaviors obtained from electronic health records of n=15,652 individuals and show that the estimated results support the conclusions from the simulation studies.
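The recombination step described above can be sketched with a fixed-effect (inverse-variance) meta-analysis. This is a deliberately simplified stand-in for the univariate and multivariate techniques the thesis actually compares: each sub-sample (whether from the K independent samples split or a natural cluster) contributes an estimate and its variance, and the pooled estimate weights each by its precision.

```python
def meta_combine(estimates, variances):
    # Fixed-effect (inverse-variance) pooling of per-sub-sample estimates:
    # weight w_i = 1/var_i, pooled estimate = sum(w_i * e_i) / sum(w_i),
    # pooled variance = 1 / sum(w_i). A simplified sketch, not the
    # thesis's multivariate procedure.
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var

# Three hypothetical sub-sample log-odds estimates with their variances.
est, var = meta_combine([0.8, 1.2, 1.0], [0.04, 0.04, 0.02])
print(est, var)
```

Note that the more precise sub-sample (variance 0.02) pulls the pooled estimate toward its value twice as strongly as either of the others, and the pooled variance is smaller than any single sub-sample's.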
|
17 |
Characterization of cluster/monomer ratio in pulsed supersonic gas jets. Gao, Xiaohui (Doctor of Physics). 31 January 2013
Cluster mass fraction is an elusive quantity to measure, calculate or estimate accurately for pulsed supersonic gas jets typical of intense laser experiments. The optimization of this parameter is critical for transient phase-matched harmonic generation in an ionized cluster jet at high laser intensity. We present an in-depth study of a rapid, noninvasive, single-shot optical method of determining the cluster mass fraction f_c(r,t) at specified positions r within, and at time t after opening the valve of, a high-pressure pulsed supersonic gas jet. A ∼ 2 mJ fs pump pulse ionizes the monomers, causing an immediate drop in the jet’s refractive index n_jet proportional to monomer density, while simultaneously initiating hydrodynamic expansion of the clusters. The latter leads to a second drop in n_jet that is proportional to cluster density and is delayed by ∼ 1 ps. A temporally stretched probe pulse measures the two-step index evolution in a single shot by frequency domain holography, enabling recovery of f_c. We present the theory behind the recovery of f_c in detail. We also present extensive measurements of spatio-temporal profiles f_c(r, t) of cluster mass fraction in a high-pressure supersonic argon jet for various values of backing pressure P and reservoir temperature T.
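Under the strong simplifying assumption that each step of the index drop is proportional to the number of atoms in the corresponding phase, with the same per-atom constant for monomers and cluster atoms, recovering f_c reduces to a ratio of the two measured drops. The function below is a sketch of that idea only, not the full theory presented in the thesis.

```python
def cluster_mass_fraction(delta_n_prompt, delta_n_delayed):
    # delta_n_prompt: immediate index drop (proportional to monomer density)
    # delta_n_delayed: ~1 ps delayed drop (proportional to cluster density)
    # Assumption (not from the thesis): identical per-atom proportionality
    # constants, so f_c is the delayed drop's share of the total drop.
    total = delta_n_prompt + delta_n_delayed
    return delta_n_delayed / total

# Hypothetical measured index drops from a frequency-domain hologram.
f_c = cluster_mass_fraction(3e-5, 1e-5)
print(f_c)
```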
|
18 |
The impact of nonnormal and heteroscedastic level one residuals in partially clustered data. Talley, Anna Elizabeth. 11 December 2013
The multilevel model (MLM) is easily parameterized to handle partially clustered data (see, for example, Baldwin, Bauer, Stice, & Rohde, 2011). The current study evaluated the performance of this model under various departures from underlying assumptions, including the assumptions of normally distributed and homoscedastic Level 1 residuals. Two estimating models – one assuming homoscedasticity, the other allowing for the estimation of unique Level 1 variance components – were compared. Results from a Monte Carlo simulation suggest that the MLM for partially clustered data yields consistently unbiased parameter estimates, except for an underestimation of the Level 2 variance component under heteroscedastic generating conditions. However, this negative parameter bias disappeared when the MLM allowed for Level 1 heteroscedasticity. Standard errors for variance component estimates at both levels were underestimated in the presence of nonnormally distributed Level 1 residuals. A discussion of results, as well as suggestions for future research, is provided.
|
19 |
Models for Univariate and Multivariate Analysis of Longitudinal and Clustered Data. Luo, Dandan. Unknown Date
No description available.
|
20 |
SCOPE: Scalable Clustered Objects with Portable Events. Matthews, Christopher. 27 September 2006
Writing truly concurrent software is hard; scaling software to fully utilize hardware is one of the reasons why. Clustered objects are one abstraction for increasing the scalability of systems software, and a proven one. This thesis explores a user-level abstraction based on clustered objects which increases hardware utilization without requiring any customization of the underlying system. We detail the design, implementation and testing of Scalable Clustered Objects with Portable Events (SCOPE), a user-level system inspired by the implementation of the clustered objects model in IBM Research’s K42 operating system. To aid the portability of the new system, we introduce the idea of a clustered object event, which is responsible for maintaining the runtime environment of the clustered objects. We show that SCOPE can increase scalability on a simple microbenchmark and provide most of the benefits of the kernel-level implementation.
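The clustered-object idea, one logical object whose state is partitioned into per-processor or per-thread representatives, can be sketched at user level. This Python toy (a counter with per-thread representatives, aggregated only on read) illustrates the pattern only; it is not SCOPE's actual implementation, and SCOPE's event mechanism is not modeled here.

```python
import threading

class ClusteredCounter:
    # One logical counter backed by per-thread representatives: each
    # thread increments its own local state, so the hot path has no
    # shared-lock contention; reads aggregate across representatives.
    def __init__(self):
        self._reps = {}              # thread id -> [local count]
        self._lock = threading.Lock()

    def _rep(self):
        tid = threading.get_ident()
        if tid not in self._reps:
            with self._lock:         # lock only when creating a new rep
                self._reps.setdefault(tid, [0])
        return self._reps[tid]

    def increment(self):
        self._rep()[0] += 1          # touches thread-local state only

    def total(self):
        with self._lock:             # reads pay the aggregation cost
            return sum(v[0] for v in self._reps.values())
```

The design choice mirrors the trade-off in the thesis: updates become cheap and scalable because they avoid shared state, while operations that need a global view (here, `total`) must visit every representative.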
|