851. Methods for Path Loss Prediction (Akkasli, Cem, January 2009)
Large-scale path loss modeling plays a fundamental role in designing both fixed and mobile radio systems, yet there is no single standard method for predicting a system's radio coverage area. Because wireless systems are expensive, one has to choose a prediction method suited to the channel environment, frequency band, and desired coverage range before deploying a system. Path loss prediction is crucial to link budget analysis and to cell coverage prediction in mobile radio systems. Especially in urban areas, a growing number of subscribers creates the need for more base stations and channels, and exploiting frequency reuse efficiently in modern cellular systems requires eliminating interference at the cell boundaries; sizing cells properly therefore depends on an accurate path loss prediction method. Starting from the radio propagation phenomena and basic path loss models, this thesis describes various accurate path loss prediction methods for both rural and urban environments. The Walfisch-Bertoni and Hata models, both used for UHF propagation in urban areas, were chosen for a detailed comparison, which shows that the Walfisch-Bertoni model, although it involves more parameters, agrees with the Hata model on the overall path loss.
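As a concrete illustration of the kind of empirical model compared in this thesis, the sketch below computes the standard Okumura-Hata median path loss for an urban macrocell; the example parameter values are illustrative assumptions, not taken from the thesis.

```python
import math

def hata_urban_path_loss(f_mhz, h_base_m, h_mobile_m, d_km):
    """Median urban path loss (dB) from the Okumura-Hata model.

    Valid roughly for 150-1500 MHz, base antenna height 30-200 m,
    mobile antenna height 1-10 m, and distances of 1-20 km.
    """
    # Mobile antenna correction factor for a small/medium city.
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m \
           - (1.56 * math.log10(f_mhz) - 0.8)
    return (69.55 + 26.16 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base_m) - a_hm
            + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km))

# Illustrative link: 900 MHz, 50 m base station, 1.5 m mobile, 5 km cell.
print(f"{hata_urban_path_loss(900, 50, 1.5, 5):.1f} dB")  # about 147 dB
```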
852. Automated Bus Generation for Multi-processor SoC Design (Ryu, Kyeong Keol, 12 July 2004)
In the design of a multi-processor System-on-a-Chip (SoC), the bus architecture typically comes to the forefront because system performance depends not only on the speed of the Processing Elements (PEs) but also on the bus architecture. An efficient bus architecture, with effective arbitration to reduce contention, plays an important role in maximizing performance. Among the many issues in multi-processor SoC research, this dissertation therefore focuses on two related to the bus architecture: how to quickly and easily design an efficient bus architecture for an SoC, and how to quickly explore the design space of performance-influencing factors in search of a high-performance bus system.
The objective of this research is to provide a Computer-Aided Design (CAD) tool with which the user can quickly explore the SoC bus design space in search of a high-performance bus system. From a straightforward description of the numbers and types of Processing Elements (PEs), non-PEs, memories, and buses (including, for example, the address and data bus widths of the buses and memories), our bus synthesis tool, called BusSynth, generates a Register-Transfer Level (RTL) Verilog Hardware Description Language (HDL) description of the specified bus system. The user can run bus-accurate simulations of this RTL Verilog to arrive more quickly at an efficient bus architecture for a multi-processor SoC.
The proposed methodology gives designers fast design-space exploration of bus systems across a variety of performance-influencing factors, such as bus types, PE types, and software programming styles (e.g., pipelined-parallel versus functional-parallel). We also show that BusSynth can generate bus systems in a matter of seconds, as opposed to the weeks of design effort needed to integrate each system component by hand. Moreover, unlike previous related work, BusSynth supports a wide variety of PEs, memory types, and bus architectures (including a hybrid bus architecture) in the search for a high-performance SoC.
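BusSynth's internals are not reproduced here, so the following is only a minimal sketch of the general idea of spec-driven RTL generation: a small textual description is expanded into a parameterized Verilog skeleton. The spec fields, the fixed-priority arbiter, and the emitted module are hypothetical illustrations, not BusSynth's actual output.

```python
def emit_bus_rtl(spec):
    """Expand a small bus spec into a Verilog skeleton (illustrative only)."""
    n = spec["masters"]
    req = "\n".join(f"    input  wire        req_{i}," for i in range(n))
    gnt = "\n".join(f"    output wire        gnt_{i}," for i in range(n))
    # Fixed-priority arbiter: the lowest-numbered active requester wins.
    arb = "\n".join(
        f"    assign gnt_{i} = req_{i}"
        + "".join(f" & ~req_{j}" for j in range(i)) + ";"
        for i in range(n)
    )
    return (
        f"module {spec['name']} #(\n"
        f"    parameter DW = {spec['data_width']},\n"
        f"    parameter AW = {spec['addr_width']}\n"
        ") (\n"
        "    input  wire        clk,\n"
        "    input  wire        rst_n,\n"
        f"{req}\n{gnt}\n"
        "    input  wire [AW-1:0] addr,\n"
        "    inout  wire [DW-1:0] data\n"
        ");\n" + arb + "\nendmodule\n"
    )

print(emit_bus_rtl({"name": "soc_bus", "data_width": 32,
                    "addr_width": 16, "masters": 3}))
```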
853. Designing Secure and Robust Distributed and Pervasive Systems with Error Correcting Codes (Paul, Arnab, 11 February 2005)
This thesis investigates the role of error-correcting codes in distributed and pervasive computing. The main results lie at the intersection of security and fault tolerance for these environments, in two primary areas.
1. Protocols for large-scale, fault-tolerant, secure distributed storage. The two main concerns here are security and redundancy. In one arm of this research we developed SAFE, a distributed storage system based on a new protocol that offers a two-in-one solution to fault tolerance and confidentiality; the protocol is based on cryptographic properties of error-correcting codes. In another arm we developed esf, a prototype distributed persistent store that facilitates seamless hardware extension of storage units, high resilience to load, and high availability. The main ingredient in its design is a modern class of erasure codes known as Fountain codes. One problem in storage at this scale is the heavy overhead of the fingerprints needed for checking data integrity; esf addresses this with a clever integrity-check mechanism built on a data structure known as the Merkle tree (see the sketch after this list).
2. The design of a new remote authentication protocol, from which applications over long-range wireless links would benefit considerably. We designed and implemented LAWN, a lightweight remote authentication protocol for wireless networks that deploys a randomized approximation scheme based on error-correcting codes. We have evaluated the performance of LAWN in detail: while it adds very low computational overhead, the savings in bandwidth and power are dramatic.
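As a hedged illustration of the integrity-check idea in esf, the sketch below builds a Merkle tree over data blocks with SHA-256: a single root fingerprint then authenticates every block, so corruption of any block is detectable without storing one fingerprint per block. The block contents and hash choice are assumptions, not esf's actual parameters.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Fold a list of data blocks into a single root fingerprint."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-%d" % i for i in range(8)]
root = merkle_root(blocks)
# Integrity check: recomputing the root detects any corrupted block.
assert merkle_root(blocks) == root
blocks[3] = b"tampered"
assert merkle_root(blocks) != root
```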
854. A Second Generation Generic Systems Simulator (GENESYS) for a Gigascale System-on-a-Chip (SoC) (Nugent, Steven Paul, 14 April 2005)
Future opportunities for gigascale integration will be governed by a hierarchy of theoretical and practical limits that can be codified as fundamental, material, device, circuit, and system limits. The exponential increase in on-chip integration is making System-on-Chip (SoC) methodologies the dominant design solution for gigascale ICs. A second-generation generic systems simulator (GENESYS) is therefore developed to address the need for rapid assessment of technology/architecture tradeoffs for multi-billion-transistor SoCs while maintaining the depth of core modeling codified in the hierarchy of limits. A newly developed system methodology incorporates a hierarchical block-based model, a dual interconnect distribution covering both local and global interconnects, a generic on-chip bus model, and cell placement algorithms. A comparison of simulation results against five commercial SoC implementations shows increased accuracy in predicting die size, clock frequency, and total power dissipation. Applying ITRS projections for future technology requirements indicates that increasing static power dissipation is a key impediment to continued improvements in chip performance. Additionally, simulations of a generic chip multi-processor architecture using several interconnect schemes show that the most promising candidates for future on-chip global interconnect networks are hierarchical bus structures, which provide a high degree of connectivity while maintaining high operating frequencies.
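The static-power conclusion can be made concrete with a back-of-the-envelope estimate in the style of such system simulators; all parameter values below (transistor count, supply voltage, gate capacitance, activity factor, per-device leakage) are illustrative assumptions, not GENESYS inputs.

```python
# Rough chip power split: dynamic switching power vs. static leakage.
def chip_power(n_tx, vdd, freq_hz, c_gate_f, activity, i_leak_per_tx):
    p_dynamic = activity * n_tx * c_gate_f * vdd**2 * freq_hz  # a*C*V^2*f
    p_static = n_tx * vdd * i_leak_per_tx                      # V*I_leak
    return p_dynamic, p_static

# A notional multi-billion-transistor SoC at a deeply scaled node.
dyn, stat = chip_power(n_tx=2e9, vdd=0.9, freq_hz=2e9,
                       c_gate_f=5e-17, activity=0.05, i_leak_per_tx=20e-9)
print(f"dynamic {dyn:.1f} W, static {stat:.1f} W")  # leakage dominates here
```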
855. Turkish Experience in Privatization: The Privatizations of Large-scale State Economic Enterprises in the 2000s (Angin, Merih, 01 August 2010)
Privatization, the most important component of neo-liberal policies since the 1980s, has been legitimized by the neo-liberal doctrine through a purely economic and technical terminology. Against this, the thesis maintains that privatization is a highly political process, shaped in different countries by intertwined class- and identity-based interests.
To support this argument, the thesis makes a comparative analysis of the privatizations of large-scale state economic enterprises in Turkey in the 2000s, namely Petrol Ofisi, TÜPRAŞ, ERDEMIR, Türk Telekom, and PETKIM, as part of the neo-liberal transformation of the Turkish state. It concludes that the privatizations of large-scale SEEs in Turkey are typical examples of what David Harvey terms "accumulation by dispossession," through which wealth has been transferred from the laboring classes to capital with the active involvement of the state, though the Turkish experience has its own historical specificities. Making sense of these specificities requires understanding the political preferences of the governments in charge since the late 1990s in general, and of the Islamist AKP government after 2002 in particular.
856. Wave Component Sampling Method for High Performance Pipelined Circuits (Sever, Refik, 01 September 2011)
In all previous pipelining methods, such as conventional pipelining, wave pipelining, and mesochronous pipelining, a data wave propagating through the combinational circuit is sampled whenever it arrives at a synchronization stage. In this study, a new wave-pipelining methodology named the Wave Component Sampling Method (WCSM) is proposed. In this method, only those components of a wave whose maximum-minimum delay difference exceeds the tolerable value are sampled, while the other components continue to propagate through the circuit; the total number of registers required for synchronization therefore decreases significantly. To demonstrate the effectiveness of WCSM, an 8x8-bit carry save adder (CSA) multiplier was implemented in 0.18 µm CMOS technology. A generic transmission-gate logic block, with output delay variation optimized over input patterns, was designed and used in all sub-blocks of the multiplier. Post-layout simulation results show that this multiplier can operate at 3 GHz using only 70 latches. Compared with the mesochronous pipelining scheme, the number of registers is decreased by 41% and the total chip power by 9.5%, without any performance loss. An ultra-high-speed fully pipelined CSA multiplier operating at 5 GHz was also implemented with WCSM; its register count is decreased by 45% and its power consumption by 18.4% compared with conventional or mesochronous pipelining. WCSM was further applied to multiplier structures employing Booth encoders, Wallace trees, and carry look-ahead adders. Comparing the fully pipelined 8x8-bit WCSM multiplier with a conventionally pipelined one, the number of registers in the Booth encoder, Wallace tree, and carry look-ahead adder is decreased by 30%, 51%, and 62%, respectively.
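The WCSM sampling criterion itself is simple to state in code. The sketch below flags, among a set of wave components with known minimum/maximum propagation delays, only those whose delay spread exceeds the tolerable skew; the component names and delay numbers are hypothetical.

```python
def components_to_sample(delays, tolerable_skew):
    """Select only the wave components whose max-min delay spread exceeds
    the tolerable value; the rest keep propagating unsampled (WCSM idea)."""
    return [name for name, (d_min, d_max) in delays.items()
            if d_max - d_min > tolerable_skew]

# Hypothetical per-component delay bounds, in picoseconds.
delays = {"sum_lo": (120, 150), "sum_hi": (130, 240), "carry": (100, 310)}
print(components_to_sample(delays, tolerable_skew=60))  # ['sum_hi', 'carry']
```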
857. Large Scale Group Network Optimization (Shim, Sangho, 17 November 2009)
Every knapsack problem may be relaxed to a cyclic group problem. In 1969, Gomory found the subadditive characterization of the facets of the master cyclic group problem. We simplify the subadditive relations by substituting the complementarities, and discover a minimal representation of the subadditive polytope for the master cyclic group problem. Using the minimal representation, we characterize the vertices of cardinality length 3 and implement the shooting experiment from the natural interior point.
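For reference, the master cyclic group problem and Gomory's subadditive facet conditions can be written as follows; this is the standard formulation from the literature, in our notation rather than the thesis's.

```latex
% Master cyclic group problem P(C_n, g_0) on C_n = {0, 1, ..., n-1},
% with right-hand side g_0 != 0:
\min \sum_{g \in C_n \setminus \{0\}} c_g \, t(g)
\quad \text{s.t.} \quad
\sum_{g \in C_n \setminus \{0\}} g \, t(g) \equiv g_0 \ (\mathrm{mod}\ n),
\qquad t(g) \in \mathbb{Z}_{\ge 0}.

% Gomory's characterization: the nontrivial facets
% \sum_g \pi(g) t(g) \ge 1 are the extreme points of the system
\pi(g) + \pi(h) \ge \pi(g + h) \quad \text{(subadditivity, } g+h \bmod n\text{)},
\qquad
\pi(g) + \pi(g_0 - g) = \pi(g_0) = 1 \quad \text{(complementarity)},
\qquad \pi \ge 0.
```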
Shooting from the natural interior point is shooting from the inside of the plus-level set of the subadditive polytope, and it induces a shooting experiment for the knapsack problem. From that experiment we conclude that the most frequently hit facet is the knapsack mixed-integer cut, which is the 2-fold lifting of a mixed-integer cut. We develop a cutting plane algorithm that augments the cutting planes generated by shooting, and implement it on Wong-Coppersmith digraphs, observing that only a small number of cutting planes suffice to produce the optimal solution. We discuss a relaxation of shooting as a clue to quicker shooting, and show that a max-flow model on a covering space is equivalent to the dual of the shooting linear program.
858. Wireless Receiver Designs: From Information Theory to VLSI Implementation (Zhang, Wei, 06 October 2009)
Receiver design, and equalizer design in particular, is a major concern in both academia and industry. It poses theoretical challenges as well as severe implementation hurdles: while much research has focused on reducing the complexity of optimal or near-optimal schemes, it is still common practice in industry to use simple techniques, such as linear equalization, that are generally significantly inferior. Although digital signal processing (DSP) technologies have been applied to wireless communications to enhance throughput, users' demands for more data at higher rates have revealed new challenges. For example, to collect diversity and combat fading channels, transmitter designs that enable the diversity are not enough; the receiver must also be able to collect the prepared diversity.
Most wireless transmissions can be modeled as linear block transmission systems. Under this model, maximum likelihood equalizers (MLEs) or near-ML decoders are adopted at the receiver to collect diversity, an important performance metric, but these decoders exhibit high complexity. To reduce decoding complexity, low-complexity equalizers such as linear equalizers (LEs) and decision feedback equalizers (DFEs) are often adopted instead. These methods, however, may not exploit the diversity enabled by the transmitter and consequently perform worse than MLEs.
In this dissertation, we present efficient receiver designs that achieve low bit-error rate (BER), high mutual information, and low decoding complexity. Our approach is first to investigate the error performance and mutual information of existing low-complexity equalizers, to reveal the fundamental condition for achieving full diversity with LEs. We show that the fundamental condition for LEs to collect the same (outage) diversity as the MLE is that the channels be constrained within a certain distance from orthogonality. The orthogonality deficiency (od) is adopted to quantify a channel's distance from orthogonality, and other existing metrics are introduced and compared. To meet the fundamental condition and achieve full diversity, a hybrid equalizer framework is proposed, and the performance-complexity trade-off of hybrid equalizers is quantified by deriving the distribution of od.
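A common definition of the orthogonality deficiency in this literature (our notation; the dissertation's details may differ) is:

```latex
% For a channel matrix H = [h_1, ..., h_N] with columns h_i:
\mathrm{od}(H) \;=\; 1 - \frac{\det\!\left(H^{\mathsf{H}} H\right)}
                              {\prod_{i=1}^{N} \lVert h_i \rVert^2},
% so 0 <= od(H) <= 1 by Hadamard's inequality, with od(H) = 0
% exactly when the columns of H are orthogonal.
```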
Another approach is to apply lattice reduction (LR) techniques to improve the "quality" of channel matrices. We present the two LR methods most widely adopted in wireless communications, the Lenstra-Lenstra-Lovasz (LLL) algorithm [51] and Seysen's algorithm (SA), with detailed descriptions and pseudocode, and we quantify the properties of the output matrices of both. Other LR algorithms are also briefly introduced.
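As a concrete reference for the first of these, here is a compact textbook version of the LLL algorithm. It recomputes the Gram-Schmidt data after every basis update for readability, which a real detector implementation would avoid; it is a sketch, not the dissertation's optimized variant.

```python
import numpy as np

def lll_reduce(B, delta=0.75):
    """Textbook LLL lattice basis reduction on the columns of B."""
    B = B.astype(float).copy()
    n = B.shape[1]

    def gso(B):
        Q = np.zeros_like(B)       # Gram-Schmidt orthogonalized vectors
        mu = np.zeros((n, n))      # projection coefficients
        for i in range(n):
            Q[:, i] = B[:, i]
            for j in range(i):
                mu[i, j] = (B[:, i] @ Q[:, j]) / (Q[:, j] @ Q[:, j])
                Q[:, i] -= mu[i, j] * Q[:, j]
        return Q, mu

    Q, mu = gso(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):          # size reduction
            q = round(mu[k, j])
            if q:
                B[:, k] -= q * B[:, j]
                Q, mu = gso(B)
        if Q[:, k] @ Q[:, k] >= (delta - mu[k, k - 1] ** 2) * (Q[:, k - 1] @ Q[:, k - 1]):
            k += 1                              # Lovász condition holds
        else:
            B[:, [k - 1, k]] = B[:, [k, k - 1]]  # swap and backtrack
            Q, mu = gso(B)
            k = max(k - 1, 1)
    return B

# A nearly collinear basis is replaced by a much more orthogonal one.
print(lll_reduce(np.array([[1.0, 0.99], [1.0, 1.01]])))
```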
After introducing the LR algorithms, we show how to adopt them in the decoding process by presenting LR-aided hard-output detectors and, for coded systems, LR-aided soft-output detectors. We also analyze the proposed receivers from the perspectives of diversity, mutual information, and complexity, proving that LR techniques restore the diversity of low-complexity equalizers without increasing complexity significantly.
In practical systems and simulation tools (e.g., MATLAB), numbers are represented with finitely many bits, so we revisit the diversity analysis for finite-bit-represented systems. We show that the diversity of the MLE under finite-bit representation is determined by the number of non-vanishing eigenvalues, and that although LR-aided detectors theoretically collect the same diversity as the MLE over the real/complex field, they may exhibit different diversity orders once finite-bit representation is taken into account. Finally, a VLSI implementation of the complex LLL algorithm is provided to verify the practicality of the proposed designs.
859. Upwelling and Cross-shelf Transport Dynamics along the Pacific Eastern Boundary (Combes, Vincent, 06 July 2010)
The upwelling and cross-shelf transport dynamics along the Pacific eastern boundary are explored using a high-resolution ocean model covering the last 60 years. Three circulations are modeled; from north to south, we investigate the dynamics of the Gulf of Alaska (GOA), the California Current System (CCS), and the Humboldt Current System (HCS, also known as the Peru-Chile Current System). The statistics of coastal water transport are computed using a model passive tracer continuously released at the coast. From the tracer concentration distribution, we find that the Pacific Decadal Oscillation modulates the coastal variability of the GOA and the North Pacific Gyre Oscillation controls the upwelling of the CCS, while the El Niño-Southern Oscillation affects the upwelling off Peru and Chile mainly through coastally trapped Kelvin waves. The results also emphasize the key role of mesoscale eddies in the offshore transport of coastal water masses. The passive tracer experiments performed here for the GOA, CCS, and HCS thus provide a dynamical framework for understanding upwelling/downwelling and the offshore transport of nutrient-rich coastal water, and for interpreting how they respond to atmospheric forcing. They could also strengthen the interpretation (and hence prediction) of changes in the vertical and offshore advection of other biogeochemical quantities essential to understanding ecosystem variability.
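The passive-tracer technique can be summarized in one equation: a tracer of concentration C is advected and mixed by the simulated currents without feeding back on them. This is the generic form; the exact mixing terms depend on the model configuration used in the thesis.

```latex
% Passive tracer C: advection by the model velocity field u plus
% diffusive mixing, with a source S nonzero only in the coastal
% release region; C has no feedback on the circulation.
\frac{\partial C}{\partial t} + \mathbf{u} \cdot \nabla C
  = \nabla \!\cdot (\kappa \, \nabla C) + S(\mathbf{x}, t)
```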
860. Management of Building Energy Consumption and Energy Supply Network on Campus Scale (Lee, Sang Hoon, 19 January 2012)
Building portfolio management at campus and metropolitan scale involves decisions about energy retrofits, energy resource pooling, and investments in shared energy systems such as district cooling, community PV and wind power, CHP systems, and geothermal systems. No current tools help a portfolio or campus manager make these decisions by rapid comparison of variants. This research has developed an energy supply network management tool at the campus scale. The underlying network energy performance (NEP) model uses (1) an existing energy performance toolkit to quantify, on an hourly basis, the energy use of the consuming buildings, and (2) added modules to calculate hourly average generation from a wide variety of energy supply systems.
The NEP model supports macro decisions on the generation side (adding or retrofitting campus-wide systems) and on the consumption side (planning new building designs and retrofit measures). It allows testing different supply topologies by specifying which consumer nodes connect to which local suppliers and to which global suppliers, i.e., the electricity and gas utility grids. A prototype software implementation lets a portfolio or campus manager define the demand and supply nodes at campus scale and manipulate the connections between them through a graphical interface. The NEP model maintains the network topology as a directed graph, with the supply and demand nodes as vertices and their connections as arcs. Every change in the graph automatically triggers an update of the energy generation and consumption pattern, and the results are shown on campus-wide energy performance dashboards.
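A minimal sketch of the data structure just described might look as follows; the node names, units, and update rule are illustrative assumptions, not the NEP model's actual interfaces.

```python
class EnergyNetwork:
    """Directed graph of energy supply and demand nodes (hypothetical API)."""

    def __init__(self, hours=8760):
        self.hours = hours
        self.supply = {}     # node name -> hourly generation profile (kWh)
        self.demand = {}     # node name -> hourly consumption profile (kWh)
        self.arcs = set()    # (supplier, consumer) connections
        self.grid_import = [0.0] * hours

    def add_supply(self, name, profile):
        self.supply[name] = profile
        self._update()

    def add_demand(self, name, profile):
        self.demand[name] = profile
        self._update()

    def connect(self, supplier, consumer):
        self.arcs.add((supplier, consumer))
        self._update()       # every topology change triggers a recompute

    def _update(self):
        # Dashboard quantity: hourly residual load that the global
        # utility grid must cover after connected local supply.
        connected = {s for s, _ in self.arcs}
        self.grid_import = [
            sum(p[h] for p in self.demand.values())
            - sum(p[h] for n, p in self.supply.items() if n in connected)
            for h in range(self.hours)
        ]

net = EnergyNetwork(hours=24)
net.add_demand("lab_building", [50.0] * 24)   # flat 50 kWh/h load
net.add_supply("campus_pv", [20.0 if 8 <= h < 18 else 0.0 for h in range(24)])
net.connect("campus_pv", "lab_building")
print(max(net.grid_import))                   # peak utility-grid import
```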
The dissertation shows how the NEP model supports decision making for large-scale building energy system design through a case study of the Georgia Tech campus, evaluating three assertions:
1. The normative calculations at the individual building scale are accurate enough to support the network energy performance analysis.
2. The NEP model supports studying the tradeoffs between local building retrofits and campus-wide interventions in renewable systems under different circumstances.
3. The NEP approach is a viable basis for routine campus asset management policies.