Ugural, Suleyman Sadi
In this thesis, we establish a cellular CDMA reverse-channel model that incorporates a time-invariant discrete multipath Nakagami-fading channel in a multiple-cell system. The effects of intra- and inter-cell interference, perfect power control, lognormal shadowing, and a RAKE receiver with a varying number of taps are investigated. For performance improvement, forward error correction and smart-antenna techniques are incorporated into the model. Expressions for the probability of bit error are developed under a range of operating conditions, and the model is tested using Monte Carlo simulation. / Turkish Army author
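The Monte Carlo testing of bit-error expressions can be illustrated with a minimal sketch. This is not the thesis model (no multipath, RAKE combining, or interference terms): it assumes coherent BPSK on a single flat Nakagami-m branch with unit-mean Gamma-distributed power gain, and the function names are illustrative.

```python
import math
import random

def q_function(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def nakagami_ber_bpsk(m, avg_snr_db, trials=200_000, seed=1):
    """Semi-analytic Monte Carlo estimate of the BPSK bit-error
    probability over a flat Nakagami-m fading channel: the power gain
    is Gamma(m, 1/m)-distributed (unit mean), and the conditional BER
    at instantaneous SNR g * avg_snr is Q(sqrt(2 * g * avg_snr))."""
    rng = random.Random(seed)
    avg_snr = 10 ** (avg_snr_db / 10)
    total = 0.0
    for _ in range(trials):
        g = rng.gammavariate(m, 1.0 / m)  # fading power gain, E[g] = 1
        total += q_function(math.sqrt(2 * g * avg_snr))
    return total / trials
```

For m = 1 the Nakagami channel reduces to Rayleigh fading, where the closed form 0.5 * (1 - sqrt(snr / (1 + snr))) provides a sanity check on the estimate.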
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, September 2001. / Thesis advisor(s): Ha, Tri T.; Tighe, Jan E. Includes bibliographical references (p. 77-78). Also available online.
Evaluating the performance of large distributed applications is an important and non-trivial task. With the onset of Internet-wide applications, there is an increasing need to quantify the reliability, dependability, and performance of these systems, both as a guide in system design and as a means to understand the fundamental properties of large-scale distributed systems. Previous research has mainly focused either on formalised models, where system properties can be deduced and verified using rigorous mathematics, or on measurements and experiments on deployed applications. Our aim in this thesis is to study models at an abstraction level lying between the two ends of this spectrum. We adopt a model of distributed systems inspired by methods used in the study of large-scale systems of particles in physics, and model the application nodes as a set of interacting particles, each with an internal state, whose actions are specified by the application program. We apply our modeling and performance-evaluation methodology to four different distributed and parallel systems. The first system is the distributed hash table (DHT) Chord running in a dynamic environment. We study the system under two scenarios. First, we study how performance (in terms of lookup latency) is affected in a network with finite communication latency. We show that an average delay, in conjunction with other parameters describing changes in the network (such as timescales for network repair and join and leave processes), induces fundamentally different system performance. We also verify our analytical predictions via simulations. In the second scenario we introduce network address translators (NATs) into the network model. This makes the overlay topology non-transitive, and we explore the implications of this fact for various performance metrics such as lookup latency, consistency, and load balance.
The latter analysis is mainly simulation-based. Even though these two studies focus on a specific DHT, many of our results can easily be translated to other similar ring-based DHTs with long-range links, and the same methodology can be applied even to DHTs based on other geometries. The second type of system studied is an unstructured gossip protocol running a distributed version of the famous Bellman-Ford algorithm. The algorithm, called GAP, generates a spanning tree over the participating nodes, and the question we set out to study is how reliable this structure is (in terms of generating accurate aggregate values at the root) in the presence of node churn. All our analytical results are also verified using simulations. The third system studied is a content distribution network (CDN) of interconnected caches in an aggregation access network. In this model, content, which sits at the leaves of the cache-hierarchy tree, is requested by end users. Requests can then either be served by the first cache level or sent further up the tree. We study the performance of the whole system under two cache-eviction policies, namely LRU and LFU. We compare our analytical results with traces from related caching systems. The last system is a work-stealing heuristic for task distribution on the TileraPro64 chip. This system has access to a shared memory and is therefore classified as a parallel system. We create a model for the dynamic generation of tasks as well as for how they are executed and distributed among the participating nodes. We study how the heuristic scales when the number of nodes exceeds the number of processors on the chip, as well as how different work-stealing policies compare with each other. The work on this model is mainly simulation-based. / Evaluating the performance of large-scale distributed systems is an important and non-trivial task. With the development of the Internet and the fact that applications and systems have attained global reach, there is a growing need to quantify the reliability and performance of these systems, both as a basis for system design and as a way to build understanding of the fundamental properties of distributed systems. Earlier research has largely focused either on formalised models, where properties can be derived using strict mathematical methods, or on measurements of real systems. The goal of the work in this thesis is to investigate models at an abstraction level between these two extremes. We apply a model of distributed systems inspired by so-called particle models from theoretical physics, and model application nodes as a collection of interacting particles, each with its own internal state, whose behaviour is described by the program being executed. We apply this modeling and evaluation method to four different distributed and parallel systems. The first system is the distributed hash table (DHT) Chord in a dynamic environment. We study the system under two scenarios. First we evaluate how the system behaves (with respect to lookup latency) in a network with finite communication delays. Our work shows that a general delay in the network, together with other parameters (such as the timescales of failure correction and the node join process), generates fundamentally different performance measures. We verify our analytical model with simulations. In the second scenario we examine the significance of NATs (network address translators) in the network model. Their presence removes the transitive property of the network topology, and we examine how this affects lookup cost, data consistency, and load balance. This analysis is mainly simulation-based. Even though these two studies focus on a specific DHT, most of the results, and the methodology as such, can be transferred to other similar ring-based DHTs with long links, and to other geometries as well. The second class of systems analysed is unstructured gossip protocols, in the form of the well-known Bellman-Ford algorithm. The algorithm, GAP, creates a spanning tree over the system's nodes. The problem we study is how reliable this structure is, with respect to the precision of aggregates at the root node, in a dynamic network. All analytical results are verified in a simulator. The third system we examine is a CDN (content distribution network) with a hierarchical cache structure in its distribution network. In this model, data is requested from the leaves of the cache tree. A request can either be served by the caches at the lower levels or be forwarded up the tree. We analyse two fundamental heuristics, LRU and LFU. We compare our analytical results with trace data from real cache systems. Finally, we analyse a heuristic for load distribution on the TileraPro64 architecture. The system has a central shared memory and is therefore to be regarded as parallel. Here we create a model for the dynamic generation of load and for how it is distributed to the various nodes on the chip. We study how the heuristic scales when the number of nodes exceeds the number on the chip (64), and compare the performance of different heuristics. The analysis is simulation-based.
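The LRU/LFU comparison in the CDN part of the abstract can be illustrated with a toy sketch: a single fixed-capacity cache under each eviction policy, replaying the same request trace and counting hits. The class names and bookkeeping are illustrative, not taken from the thesis model.

```python
from collections import OrderedDict, defaultdict

class LRUCache:
    """Evict the least recently used item when full."""
    def __init__(self, capacity):
        self.capacity, self.store = capacity, OrderedDict()
    def request(self, key):
        if key in self.store:
            self.store.move_to_end(key)     # mark as most recently used
            return True                     # hit
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # drop the LRU item
        self.store[key] = True
        return False                        # miss

class LFUCache:
    """Evict the least frequently used item when full."""
    def __init__(self, capacity):
        self.capacity, self.store = capacity, set()
        self.freq = defaultdict(int)        # request counts, incl. misses
    def request(self, key):
        self.freq[key] += 1
        if key in self.store:
            return True
        if len(self.store) >= self.capacity:
            self.store.remove(min(self.store, key=self.freq.__getitem__))
        self.store.add(key)
        return False
```

Replaying a trace through both caches and comparing hit counts gives the kind of policy comparison that the thesis carries out analytically against real cache traces.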
31 January 2012
This thesis examines the problem of scheduling with incomplete and/or local information in wireless systems. With large numbers of users and limited feedback resources, wireless systems require good scheduling algorithms to attain their performance limits. Classical studies of wireless scheduling investigate in much detail settings where the full state of the system is available when scheduling users. In contrast, this thesis focuses on the case where valuable network state information is lacking at the scheduler, and studies the resulting effect on system performance. The insights gained from the analysis are used to develop efficient wireless scheduling algorithms that operate with limited state information and guarantee high throughput and delay performance. The first part of the thesis considers scheduling for stability in a wireless downlink system, where a base station or server schedules transmissions to users while acquiring channel state information from only subsets of users. It is shown that the system's throughput region is completely characterized by the marginal channel statistics over observable channel subsets. Effective, queue-length-based joint sampling and scheduling algorithms are developed that observe appropriate subsets of channels and schedule users, and the algorithms are shown to be throughput-optimal. Next, the thesis studies the queue-length performance of wireless scheduling algorithms that use only partial, subset-based channel state information. The chief objective here is to design partial-information-based scheduling algorithms that keep the packet queues in the system short, and in this regard the contributions of this thesis are twofold. First, from the algorithmic perspective, wireless scheduling algorithms using partial channel state information are designed that minimize the likelihood of queue overflow, in a suitable sense, across all partial-information scheduling algorithms.
The second key contribution is technical: the development of novel analytical techniques to study the stochastic dynamics of partial-state-information-based algorithms. These techniques are not only instrumental in showing the optimality results above, but are also of independent interest for understanding the behavior of algorithms that rely on a partially sampled system state. The second part of the thesis investigates coordinated inter-cell wireless scheduling across multiple base stations, each possessing only local and partial channel state information for its own users. Coordinated scheduling is necessary to mitigate interference between users in adjacent cells, but information sharing between the base stations is limited by high latencies in the backhauls that interconnect them. A class of distributed scheduling algorithms is developed in which the base stations share only delayed queue-length information with each other, and locally acquire partial channel state information, to schedule users. These algorithms are shown to be throughput-optimal, and their average backlog performance as a function of the inter-base-station latency is quantified.
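The joint sampling-and-scheduling idea from the first part can be sketched for a single slot. The subset rule here (probe the users with the longest queues, then apply MaxWeight over the probed users) is a simplified stand-in for the thesis algorithms; the `probe` callback and user names are hypothetical.

```python
def schedule_slot(queues, probe, sample_size):
    """One slot of a queue-length-based joint sampling/scheduling rule:
    probe channel state for only the `sample_size` users with the
    longest queues (feedback is limited, so not all channels can be
    observed), then serve the probed user maximizing queue length times
    observed rate -- a MaxWeight rule restricted to the sampled subset."""
    sampled = sorted(queues, key=queues.get, reverse=True)[:sample_size]
    rates = {u: probe(u) for u in sampled}
    return max(sampled, key=lambda u: queues[u] * rates[u])
```

For example, with queues {u1: 4, u2: 9, u3: 6} and rates {u1: 4, u2: 1, u3: 2}, probing two channels samples u2 and u3 and serves u3 (weight 12 beats 9), even though u2 has the longest queue.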
Charles, Nathan Richard
No description available.
25 January 2007
For the past few years, bond funds have been the most popular investment instrument among all investors. Because the values of the bonds held by bond funds were not marked to market, the net asset values of bond funds could be greatly distorted. As interest rates rise, most structured inverse-floating-rate bonds held by bond funds suffer enormous capital losses, which greatly affects bond-fund yields. To protect the general public, the Financial Supervisory Commission (FSC) of the R.O.C. requested all bond funds to sell off all inverse-floating-rate bonds, at the expense of the fund companies' shareholders, to correct the distortion by the end of 2005. To understand the effect of these inverse-floating-rate bonds on bond-fund performance, this paper analyzes the relationship between fund allocation and fund performance, using grouping analysis and a rolling-window method, to identify the most important factors affecting fund performance. Bond funds are found to generally allocate their assets across three categories: corporate bonds, government-bond repurchases, and short-term deposits. How a fund allocates its assets is found to affect bond-fund performance differently at different times. Before September 2005, funds with more corporate bonds performed better, but after September 2005, funds with more government-bond repurchases performed better. No single asset-allocation category always leads to superior fund performance. After further studying the portfolio-rebalancing data of various bond funds, it is found that funds with better performance made smaller and slower portfolio adjustments after the FSC request. This may reflect the fact that the better funds were holding fewer structured products, such as inverse-floating-rate bonds, so that they could maintain better performance while reducing their corporate-bond holdings.
Bond funds favored by the general public, as indicated by a greater increase in fund asset size during the period from April 2005 through August 2006, also show similar properties.
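The rolling-window method used in the analysis amounts to recomputing a statistic over a sliding window of observations as time advances; a minimal sketch with a hypothetical function name and a synthetic series, not the paper's data:

```python
def rolling_mean(series, window):
    """Mean of each consecutive length-`window` slice of `series`,
    oldest window first -- the basic operation behind a rolling-window
    performance comparison."""
    return [sum(series[i - window:i]) / window
            for i in range(window, len(series) + 1)]
```

Applied to monthly fund returns, each output value summarizes performance over one window, so funds can be ranked period by period rather than over the whole sample at once.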
10 October 2008
Delivering high-quality video to end users over the best-effort Internet is a challenging task, since the quality of streaming video is highly subject to network conditions. A fundamental issue in this area is how real-time applications cope with network dynamics and adapt their operational behavior to offer a favorable streaming environment to end users. As an effort toward providing such a streaming environment, the first half of this work focuses on analyzing the performance of video streaming in best-effort networks and developing a new streaming framework that effectively exploits the unequal importance of video packets in rate control and achieves near-optimal performance for a given network packet-loss rate. In addition, we study error-protection methods such as FEC (forward error correction), which is often used to protect multimedia data over lossy network channels. We investigate the impact of FEC on video quality and develop models that provide insight into how the inclusion of FEC affects streaming performance and its optimality and resilience characteristics under dynamically changing network conditions. In the second part of this thesis, we focus on measuring the bandwidth of network paths, which plays an important role in characterizing Internet paths and can benefit many applications, including multimedia streaming. We conduct a stochastic analysis of an end-to-end path and develop novel bandwidth-sampling techniques that can produce asymptotically accurate estimates of the capacity and available bandwidth of the path under non-trivial cross-traffic conditions. In addition, we conduct a comparative performance study of existing bandwidth-estimation tools in non-simulated networks, where various timing irregularities affect delay measurements. We find that when high-precision packet timing is unavailable due to hardware interrupt moderation, the majority of existing algorithms cannot measure end-to-end paths with high accuracy.
We overcome this problem by applying signal de-noising techniques to bandwidth measurement. We also develop a new measurement tool called PRC-MT, based on theoretical models, that simultaneously measures the capacity and available bandwidth of the tight link with asymptotic accuracy.
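Capacity estimation of this kind is commonly based on packet-pair dispersion; the sketch below uses a simple median over measured gaps as a crude stand-in for the de-noising step (the thesis applies proper signal de-noising, and PRC-MT itself is not reproduced here).

```python
import statistics

def estimate_capacity(packet_size_bits, gaps_seconds):
    """Packet-pair capacity estimate: the narrow link spreads
    back-to-back packets packet_size / capacity seconds apart, so
    capacity = packet_size / dispersion. Measured gaps are noisy
    (cross traffic, interrupt moderation), so take the median gap
    as a crude de-noising step before inverting."""
    return packet_size_bits / statistics.median(gaps_seconds)
```

With 1500-byte (12000-bit) probes whose gaps cluster around 1.2 ms, the median discards outlier gaps inflated by queued cross traffic and the estimate lands near 10 Mbps.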
15 May 2009
1x Evolution-Data Optimized Revision A (1xEV-DO Rev. A) is a cellular communications standard that introduces key enhancements to the high-data-rate, packet-switched 1xEV-DO Release 0 standard. The enhancements are driven by the increasing demands of applications that are delay-sensitive and require symmetric data rates on the uplink and downlink; examples of such applications are video telephony and voice over Internet protocol (VoIP). The handoff operation is critical for delay-sensitive applications because the mobile station (MS) must not lose service for long periods of time. Therefore, seamless server selection is used in Rev. A systems. This research analyzes the performance of this handoff technique. A theoretical approach is presented to calculate the slot error probability (SEP). The approach enables evaluating the effects of filtering and hysteresis, as well as the system-introduced delay, on handoff execution. Unlike previous work, the model presented in this thesis considers multiple base stations (BSs) and accounts for the correlation of the shadow fading affecting the signal powers received from different BSs. The theoretical results are then verified over ranges of parameters of practical interest using simulations, which are also used to evaluate the packet error rate (PER) and the number of handoffs per second. Results show that the SEP gives a good indication of the PER. Results also show that, for practical handoff delays, moderately large filter constants are more efficient than smaller ones.
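The interplay of filtering and hysteresis in handoff decisions can be sketched as follows. This is a simplified illustration with first-order IIR pilot smoothing and a fixed hysteresis margin, not the Rev. A seamless server-selection rule analyzed in the thesis.

```python
def handoff_slots(serving_dbm, candidate_dbm, alpha, hysteresis_db):
    """Indices of slots where a filtered handoff rule with hysteresis
    would trigger: both pilot measurements are exponentially smoothed
    (larger alpha = heavier filtering), and handoff triggers whenever
    the smoothed candidate pilot exceeds the smoothed serving pilot
    by at least hysteresis_db."""
    s, c, slots = serving_dbm[0], candidate_dbm[0], []
    for i in range(1, len(serving_dbm)):
        s = alpha * s + (1 - alpha) * serving_dbm[i]
        c = alpha * c + (1 - alpha) * candidate_dbm[i]
        if c >= s + hysteresis_db:
            slots.append(i)
    return slots
```

With a sudden crossover in pilot strengths, heavier filtering (larger alpha) delays the first trigger, illustrating the delay-versus-ping-pong trade-off that the filter-constant results quantify.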
24 July 2004
In mobile communications networks, a location management scheme is responsible for tracking mobile users. Typically, a location management scheme consists of a location update scheme and a paging scheme. Gau and Haas first proposed the concurrent search (CS) approach, which can simultaneously locate a number of mobile users in mobile communications networks. We propose using the theory of discrete-time Markov chains to analyze the performance of the concurrent search approach. In particular, we concentrate on the worst case, in which each mobile user is equally likely to appear in any cell of the network. We analyze the average paging delay, the call blocking probability, and the system size. We show that our analytical results are consistent with simulation results for the concurrent search.
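The discrete-time Markov chain machinery behind such an analysis can be illustrated with a generic sketch that computes a chain's long-run state occupancy by power iteration; the toy two-state chain here is illustrative, not the paging model from the paper.

```python
def stationary_distribution(P, iters=200):
    """Stationary distribution of an ergodic discrete-time Markov chain
    with row-stochastic transition matrix P, obtained by repeatedly
    multiplying a uniform initial distribution by P (power iteration).
    Long-run metrics such as average delay or blocking probability are
    then expectations under this distribution."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi
```

For a two-state chain with transition matrix [[0.9, 0.1], [0.5, 0.5]], balance gives 0.1·pi0 = 0.5·pi1, so the stationary distribution is (5/6, 1/6).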
Soysa, Madushanka Dinesh
No description available.