Ugural, Suleyman Sadi
In this thesis, we established a cellular CDMA reverse channel model incorporating a time-invariant discrete multipath Nakagami-fading channel in a multiple-cell system. The effects of intra- and inter-cell interference, perfect power control, lognormal shadowing, and a RAKE receiver with a varying number of taps are investigated. For performance improvement, forward error correction and smart antenna techniques are incorporated into the model. Expressions for the probability of bit error are developed under a range of operating conditions, and the model is tested using Monte Carlo simulation. / Turkish Army author
Thesis (M.S. in Electrical Engineering) Naval Postgraduate School, September 2001. / Thesis advisor(s): Ha, Tri T.; Tighe, Jan E. Includes bibliographical references (p. 77-78). Also available online.
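The bit-error analysis described above can be sanity-checked numerically. Below is a minimal, illustrative Monte Carlo sketch, not the thesis's actual model (it ignores multipath, interference and the RAKE combiner): it averages the conditional BPSK error probability over Nakagami-m fading, where the instantaneous SNR is Gamma-distributed. Function names and parameters are our placeholders.

```python
import math
import random

def q_func(x):
    """Gaussian Q-function via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mc_ber_nakagami(m, avg_snr_db, trials=200_000, seed=1):
    """Monte Carlo average BPSK bit-error probability over Nakagami-m fading.
    The instantaneous SNR is Gamma(shape=m, mean=avg_snr) distributed."""
    rng = random.Random(seed)
    avg_snr = 10 ** (avg_snr_db / 10.0)
    total = 0.0
    for _ in range(trials):
        snr = rng.gammavariate(m, avg_snr / m)   # instantaneous SNR draw
        total += q_func(math.sqrt(2.0 * snr))    # conditional BPSK BER
    return total / trials
```

Smaller m means deeper fading, so at the same average SNR the average error rate rises; m = 1 recovers the Rayleigh case.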
Evaluating the performance of large distributed applications is an important and non-trivial task. With the onset of Internet-wide applications there is an increasing need to quantify the reliability, dependability and performance of these systems, both as a guide in system design and as a means to understand the fundamental properties of large-scale distributed systems. Previous research has mainly focused either on formalised models, where system properties can be deduced and verified using rigorous mathematics, or on measurements and experiments on deployed applications. Our aim in this thesis is to study models at an abstraction level lying between the two ends of this spectrum. We adopt a model of distributed systems inspired by methods used in the study of large-scale systems of particles in physics, and model the application nodes as a set of interacting particles, each with an internal state, whose actions are specified by the application program. We apply our modelling and performance evaluation methodology to four different distributed and parallel systems. The first system is the distributed hash table (DHT) Chord running in a dynamic environment. We study the system under two scenarios. First, we study how performance (in terms of lookup latency) is affected in a network with finite communication latency. We show that an average delay, in conjunction with other parameters describing changes in the network (such as timescales for network repair and join and leave processes), induces fundamentally different system performance. We also verify our analytical predictions via simulations. In the second scenario we introduce network address translators (NATs) into the network model. This makes the overlay topology non-transitive, and we explore the implications of this fact for various performance metrics such as lookup latency, consistency and load balance.
The latter analysis is mainly simulation based. Even though these two studies focus on a specific DHT, many of our results can easily be translated to other similar ring-based DHTs with long-range links, and the same methodology can be applied even to DHTs based on other geometries. The second type of system studied is an unstructured gossip protocol running a distributed version of the famous Bellman-Ford algorithm. The algorithm, called GAP, generates a spanning tree over the participating nodes, and the question we set out to study is how reliable this structure is (in terms of generating accurate aggregate values at the root) in the presence of node churn. All our analytical results are also verified using simulations. The third system studied is a content distribution network (CDN) of interconnected caches in an aggregation access network. In this model, content, which sits at the leaves of the cache hierarchy tree, is requested by end users. Requests can then either be served by the first cache level or sent further up the tree. We study the performance of the whole system under two cache eviction policies, namely LRU and LFU. We compare our analytical results with traces from related caching systems. The last system is a work-stealing heuristic for task distribution on the TileraPro64 chip. This system has access to a shared memory and is therefore classified as a parallel system. We create a model for the dynamic generation of tasks as well as how they are executed and distributed among the participating nodes. We study how the heuristic scales when the number of nodes exceeds the number of processors on the chip, as well as how different work-stealing policies compare with each other. The work on this model is mainly simulation-based.
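As an illustration of the kind of quantity the Chord study measures, the sketch below simulates greedy finger-table lookups on a static identifier ring and counts hops; multiplying the mean hop count by an average per-hop delay gives a crude lookup-latency estimate. This is a deliberately simplified static model (no churn, NATs or failures), and all names are our own, not the thesis's.

```python
import random
from bisect import bisect_left

K = 16                 # identifier bits
RING = 1 << K          # size of the identifier ring

def successor(nodes, ident):
    """First node id at or clockwise after ident (nodes sorted ascending)."""
    i = bisect_left(nodes, ident % RING)
    return nodes[i % len(nodes)]

def lookup_hops(nodes, src, key):
    """Greedy Chord-style lookup: repeatedly jump to the finger that makes
    the most clockwise progress without passing the key's successor."""
    cur, hops = src, 0
    target = successor(nodes, key)
    while cur != target:
        goal = (target - cur) % RING
        nxt = successor(nodes, cur + 1)          # immediate successor
        best = (nxt - cur) % RING
        for i in range(K):                       # fingers point at cur + 2^i
            f = successor(nodes, cur + (1 << i))
            d = (f - cur) % RING
            if best < d <= goal:
                nxt, best = f, d
        cur, hops = nxt, hops + 1
    return hops

def mean_hops(n_nodes, lookups=300, seed=7):
    """Average hop count over random lookups on a random static ring."""
    rng = random.Random(seed)
    nodes = sorted(rng.sample(range(RING), n_nodes))
    return sum(lookup_hops(nodes, rng.choice(nodes), rng.randrange(RING))
               for _ in range(lookups)) / lookups
```

With a few hundred nodes the mean hop count grows roughly like (1/2)·log2(N), so under a mean per-hop network delay d the expected lookup latency in this toy model is about hops · d.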
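To make the two cache eviction policies concrete, here is a minimal single-cache sketch, our own illustration rather than the thesis's hierarchical model, comparing LRU and LFU hit rates on a request trace.

```python
from collections import Counter, OrderedDict

class LRUCache:
    """Evicts the least recently used item when full."""
    def __init__(self, cap):
        self.cap, self.d = cap, OrderedDict()
    def request(self, key):
        hit = key in self.d
        if hit:
            self.d.move_to_end(key)              # refresh recency
        else:
            if len(self.d) >= self.cap:
                self.d.popitem(last=False)       # drop least recently used
            self.d[key] = True
        return hit

class LFUCache:
    """Evicts the least frequently requested item when full."""
    def __init__(self, cap):
        self.cap, self.d, self.freq = cap, set(), Counter()
    def request(self, key):
        self.freq[key] += 1
        hit = key in self.d
        if not hit:
            if len(self.d) >= self.cap:
                victim = min(self.d, key=lambda k: self.freq[k])
                self.d.discard(victim)           # drop least frequently used
            self.d.add(key)
        return hit

def hit_rate(cache, trace):
    """Fraction of requests in the trace served from the cache."""
    return sum(cache.request(k) for k in trace) / len(trace)
```

On traces with a few persistently popular items, LFU pins them in the cache while LRU can evict them during bursts of one-off requests; this is the qualitative difference the analytical models capture.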
Charles, Nathan Richard
No description available.
25 January 2007
For the past few years, bond funds have become the most popular investment instrument for all investors. Because the values of the bonds held by bond funds were not marked to market, the net asset values of bond funds may be greatly distorted. As interest rates rise, most structured inverse-floating-rate bonds held by bond funds suffer enormous capital losses, greatly impacting bond fund yields. In order to protect the general public, the Financial Supervisory Commission (FSC) of R.O.C. requested all bond funds to sell off all inverse-floating-rate bonds at the fund company shareholders' expense to correct the distortion by the end of 2005. To understand the effect of these inverse-floating-rate bonds on bond fund performance, this paper analyzes the relationship between fund allocation and fund performance, using grouping analysis and a rolling-window method, to identify the most important factors affecting fund performance. Bond funds are found to generally allocate their assets across three categories: corporate bonds, government bond repurchases, and short-term deposits. How a fund allocates its assets was found to have different effects on bond fund performance at different times. Before September 2005, funds with more corporate bonds performed better, but after September 2005, funds with more government bond repurchases performed better. No single asset allocation category always leads to superior fund performance. After further studying the portfolio rebalancing data of various bond funds, it is found that funds with better performance made smaller and slower portfolio adjustments after the FSC request. This may reflect the fact that the better funds held fewer structured products, such as inverse-floating-rate bonds, so that they could maintain better performance when reducing corporate bond holdings.
Bond funds favored by the general public, as indicated by a greater increase in fund asset size during the period of April 2005 through August 2006, also show similar properties.
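The rolling-window comparison of fund performance used in the paper can be sketched generically as follows; the data and function names here are invented for illustration, and the actual study groups funds by its allocation categories rather than ranking raw returns.

```python
from statistics import mean

def rolling_mean(returns, window):
    """Rolling-window average return: one value per full window,
    the first covering returns[0:window]."""
    return [mean(returns[i:i + window]) for i in range(len(returns) - window + 1)]

def rank_funds(perf_by_fund, window, at):
    """Rank funds (best first) by their rolling average return for the
    window whose index is `at`."""
    scores = {f: rolling_mean(r, window)[at] for f, r in perf_by_fund.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

Re-ranking the funds at each window position makes visible exactly the effect the paper reports: which allocation style leads the ranking changes over time.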
31 January 2012
This thesis examines the problem of scheduling with incomplete and/or local information in wireless systems. With large numbers of users and limited feedback resources, wireless systems require good scheduling algorithms to attain their performance limits. Classical studies on wireless scheduling investigate in much detail settings where the full state of the system is available when scheduling users. In contrast, this thesis focuses on the case where valuable network state information is lacking at the scheduler, and studies its resulting effect on system performance. The insights gained from the analysis are used to develop efficient wireless scheduling algorithms that operate with limited state information, and that guarantee high throughput and delay performance. The first part of the thesis considers scheduling for stability in a wireless downlink system, where a base station or server schedules transmissions to users, while acquiring channel state information from only subsets of users. It is shown that the system’s throughput region is completely characterized by the marginal channel statistics over observable channel subsets. Effective, queue-length based joint sampling and scheduling algorithms are developed that observe appropriate subsets of channels and schedule users, and the algorithms are shown to be optimal in the sense of throughput. Next, the thesis studies the queue-length performance of wireless scheduling algorithms that use only partial, subset-based channel state information. The chief objective here is to design partial information-based scheduling algorithms that keep the packet queues in the system short, and in this regard, the contributions of this thesis are twofold. First, from the algorithmic perspective, wireless scheduling algorithms using partial channel state information are designed that minimize the likelihood of queue overflow, in a suitable sense, across all partial information scheduling algorithms. 
The second key contribution is technical, by the development of novel analytical techniques to study the stochastic dynamics of partial state information-based algorithms. These techniques are not only instrumental in showing the optimality results above, but are also of independent interest in understanding the behavior of algorithms which rely on partially sampled system state. The second part of the thesis investigates coordinated inter-cell wireless scheduling across multiple base stations, each possessing only local and partial channel state information for its own users. Coordinated scheduling is necessary to mitigate interference between users in adjacent cells, but information sharing between the base stations is limited by high latencies in the backhauls that interconnect them. A class of distributed scheduling algorithms is developed in which the base stations share only delayed queue length information with each other, and locally acquire partial channel state information, to schedule users. These algorithms are shown to be throughput-optimal, and their average backlog performance in terms of the inter-base station latency is quantified.
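The queue-length-based joint sampling and scheduling idea can be illustrated with a toy single-server downlink: in each slot the scheduler sees the channel states of only a subset of users and serves the observed user with the largest queue-length × channel-rate product. This is our simplified sketch, not the thesis's algorithm; the arrival rates, channel distribution and subset-selection rule are arbitrary choices.

```python
import random

def max_weight_step(queues, channels, observed, arrivals):
    """Serve the observed user maximizing queue_length * channel_rate,
    then add the slot's new arrivals to every queue."""
    pick = max(observed, key=lambda u: queues[u] * channels[u])
    queues[pick] -= min(queues[pick], channels[pick])
    for u, a in enumerate(arrivals):
        queues[u] += a
    return pick

def avg_backlog(n_users, k_observe, slots=4000, seed=3):
    """Time-averaged per-user backlog when only k_observe of the
    n_users channels are sampled in each slot."""
    rng = random.Random(seed)
    queues, acc = [0] * n_users, 0
    for _ in range(slots):
        channels = [rng.choice([0, 1, 2]) for _ in range(n_users)]   # i.i.d. rates
        observed = rng.sample(range(n_users), k_observe)             # limited feedback
        arrivals = [1 if rng.random() < 0.08 else 0 for _ in range(n_users)]
        max_weight_step(queues, channels, observed, arrivals)
        acc += sum(queues)
    return acc / (slots * n_users)
```

Observing more channels per slot lets the scheduler exploit multiuser diversity, so in this toy model the time-averaged backlog shrinks as k_observe grows, which is the qualitative trade-off between feedback cost and queueing performance studied in the thesis.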
19 April 2016
This study sought to understand how the application of a network analysis of rugby gameplay could inform coaches of their teams' patterns of play in an effort to aid their teams' performance. A qualitative case study utilizing open-ended interviews and a process of evaluation and constant comparison served as a guiding framework for the data collection and data analysis methods incorporated during this study. Results of the study identified five key findings. First, incorporating elements of community-based action research into the design of a case study provided the researcher with an opportunity to build effective working relationships with both participants. Second, providing coaches with effective feedback that informed them of their players' performance was critical to the performance analysis (PA) process. Third, modifying the network analysis process to meet the participants' needs was key to providing applicable analysis during the case study. Fourth, performance analysts and coaches, like those in this case study, require video feedback, linked to the network analysis, if the network analysis process is to be considered informative. Finally, creating a PA process that is able to adapt to the coaches' changing needs, as well as to the work cycles the organization proceeds through, is a benefit of the NA process that we developed. / Graduate
Baek, Won-Seok, Lee, Daniel C.
International Telemetering Conference Proceedings / October 22-25, 2001 / Riviera Hotel and Convention Center, Las Vegas, Nevada / We study the reliable (acknowledged) operation (i.e., the ARQ scheme) of CFDP (CCSDS File Delivery Protocol) over a single-hop space link. We focus on the immediate NAK mode, as specified in , under the assumption that PDU error events on the forward and backward channels are statistically independent. We point out the problem of duplicated retransmissions due to the long propagation delay and analyze throughput efficiency. We also present modeling and analysis of the average time taken for the delivery of a file of arbitrary size, which are more rigorous than currently available heuristics.
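The flavor of such a delivery-time analysis can be conveyed with a simple round-based simulation of NAK-driven ARQ. This is our own rough model, not the paper's: it lumps each retransmission cycle into "send the missing PDUs, then wait one round trip for the NAKs", and assumes i.i.d. PDU loss with an error-free feedback channel.

```python
import random

def file_delivery_time(n_pdus, p_err, rtt, t_pdu, seed=0):
    """Simulate NAK-based ARQ delivery of a file of n_pdus PDUs:
    each round resends the PDUs still missing, at t_pdu seconds per PDU,
    then waits one round-trip time (rtt) for the NAKs. PDU losses are
    i.i.d. with probability p_err. Returns (delivery_time, rounds)."""
    rng = random.Random(seed)
    missing, t, rounds = n_pdus, 0.0, 0
    while missing:
        t += missing * t_pdu + rtt
        missing = sum(1 for _ in range(missing) if rng.random() < p_err)
        rounds += 1
    return t, rounds

def throughput_efficiency(n_pdus, p_err, rtt, t_pdu, trials=200):
    """Ratio of the ideal error-free send time to the simulated
    delivery time, averaged over independent trials."""
    ideal = n_pdus * t_pdu
    total = sum(file_delivery_time(n_pdus, p_err, rtt, t_pdu, seed=s)[0]
                for s in range(trials))
    return ideal / (total / trials)
```

Even this crude model shows the effect the paper analyzes rigorously: as the propagation delay (and hence the RTT term paid per retransmission round) grows, throughput efficiency collapses unless retransmissions are batched carefully.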
Featured with good scalability, modularity and large bandwidth, Network-on-Chip (NoC) has been widely applied in manycore Chip Multiprocessor (CMP) and Multiprocessor System-on-Chip (MPSoC) architectures. The provision of guaranteed service emerges as an important NoC design problem due to application requirements in Quality-of-Service (QoS). Formal analysis of performance bounds plays a critical role in ensuring guaranteed service in NoCs by giving insight into how the design parameters impact network performance. The study in this thesis proposes analysis methods for delay and backlog bounds with Network Calculus (NC). Based on xMAS (eXecutable Micro-Architectural Specification), a formal framework to model communication fabrics, the delay bound analysis procedure is presented using NC. A micro-architectural xMAS representation of a canonical on-chip router is proposed, with both the data flow and the control flow well captured. Furthermore, a well-defined xMAS model for a specific application on an NoC can be created with network and flow knowledge and then mapped to a corresponding NC analysis model for end-to-end delay bound calculation. The xMAS model effectively bridges the gap between the informal NoC micro-architecture and the formal analysis model. Besides the delay bound, the analysis of the backlog bound is also crucial for predicting buffer dimensioning boundaries in on-chip Virtual Channel (VC) routers. In this thesis, basic buffer use cases are identified, with corresponding analysis models proposed, so as to decompose the complex flow contention in a network. We then develop a topology-independent analysis technique to convey the backlog bound analysis step by step. Algorithms are developed to automate this analysis procedure. Accompanying the analysis of performance bounds, tightness evaluation is an essential step to ensure the validity of the analysis models.
However, this evaluation process is often a tedious, time-consuming, manual simulation process in which many simulation parameters may have to be configured before the simulations run. In this thesis, we develop a heuristics-aided tightness evaluation method for the analytical delay and backlog bounds. The tightness evaluation is abstracted as a set of constrained optimization problems, with the objectives formulated as implicit functions of the system parameters. Based on these well-defined problems, heuristics can be applied to guide a fully automated configuration-searching process which incorporates cycle-accurate, bit-accurate simulations. As an example heuristic, Adaptive Simulated Annealing (ASA) is adopted to guide the search in the configuration space. Experimental results indicate that the performance analysis models based on NC give tight results, which are effectively found by the heuristics-aided evaluation process even when the model has a multidimensional discrete search space and complex constraints. To facilitate xMAS modeling and the corresponding validation of the performance analysis models, the thesis presents an xMAS tool developed in Simulink. It provides a friendly graphical interface for xMAS modeling and parameter configuration based on the powerful Simulink modeling environment. Hierarchical model build-up and Verilog-HDL code generation are supported to manage complex models and conduct simulations. Owing to the synthesizable xMAS library and its good extendibility, this xMAS tool has promising use in application-specific NoC design based on the xMAS components.
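For reference, the basic Network Calculus bounds that such an analysis builds on have a simple closed form for a token-bucket-constrained flow crossing rate-latency servers. The sketch below encodes these standard NC results with our own function names; it is not the thesis's xMAS-specific model.

```python
def nc_bounds(b, r, R, T):
    """Network-calculus bounds for a token-bucket flow with arrival curve
    alpha(t) = b + r*t crossing a rate-latency server with service curve
    beta(t) = R * max(t - T, 0). Requires r <= R for finite bounds."""
    if r > R:
        raise ValueError("flow rate exceeds service rate: no finite bounds")
    delay_bound = T + b / R      # horizontal deviation between alpha and beta
    backlog_bound = b + r * T    # vertical deviation between alpha and beta
    return delay_bound, backlog_bound

def end_to_end(b, r, servers):
    """End-to-end bounds across concatenated rate-latency servers
    (pay-bursts-only-once): the equivalent service curve has rate
    min(R_i) and latency sum(T_i). `servers` is a list of (R_i, T_i)."""
    R = min(Ri for Ri, _ in servers)
    T = sum(Ti for _, Ti in servers)
    return nc_bounds(b, r, R, T)
```

The concatenation step is why the NC end-to-end delay bound is tighter than summing per-hop bounds: the burst term b/R is paid only once across the whole path.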
Technical performance on ATP top level, future level and Swedish youth national level male tennis tournaments : Notational analysis of point characteristics in three different tournaments on three different performance levels / Hallgren, Frej, January 2016.
Aim and research questions: To investigate technical performance in three different tennis competitions (ATP Masters, AM; Falu Future, FF; and Swedish youth national championships, YNC) by collecting data on point characteristics. Are there any differences or similarities between the competitions analyzed concerning the type of shots or shot combinations used, the hitting zone on the tennis court from which the shots or shot combinations are hit, and the placement of the different shots when scoring points? Are there any differences or similarities between the competitions analyzed concerning the number of valid shots over the net in a rally? Are there any differences or similarities between the competitions analyzed concerning the number of errors (forced and unforced) and winning shots committed in matches?
Method: The sample consisted of a total of 24 matches with 40 different players from three different tournaments, which were analyzed using notational analysis software (Dartfish, version 8, Switzerland). The total number of points analyzed was 3154 (AM, n = 968; FF, n = 1068; YNC, n = 1118). Data were compiled in Excel (2013) and descriptive analyses were performed in IBM SPSS Statistics 24. Statistical analyses looking for overall significant differences between the groups were made using the chi-square cross-tab test. Due to the number of statistical tests performed for each domain in the post hoc test, an adjusted significance level of p < 0.001 was used to reduce the risk of Type 1 error.
Results: Significant differences were observed between groups for serve placement, shot used after hitting a serve, type of 2nd-last and last shot used, and hitting zone and placement by the point winner on last shots. Rallies of longer duration were significantly more frequent in the AM and FF groups compared to the YNC group.
Concerning serve outcome, serve return, return placement, shot-after-serve placement, shot combinations, length of the 2nd-last and last shot, unforced errors, forced errors and winners, no statistical differences were observed between groups.
Conclusion: This study indicates that higher demands are placed on placement accuracy in the ATP Masters and Falu Future tournaments, specifically for the serve but also for groundstrokes, compared to the Swedish youth national championships tournament. This knowledge can be used to identify technical skills and physiological abilities that are important to practise in order to improve performance in tennis at different levels.
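The chi-square cross-tab test used in the analysis can be sketched as a generic Pearson chi-square computation on an r × c contingency table (e.g. tournaments × outcome categories). The counts below are invented; in practice the resulting statistic is compared against a chi-square critical value or converted to a p-value and checked against the adjusted significance level.

```python
def chi_square_stat(table):
    """Pearson chi-square statistic and degrees of freedom for an
    r x c contingency table given as a list of equal-length rows."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return stat, df
```

Identical row proportions give a statistic of zero; the more the tournaments' outcome distributions diverge, the larger the statistic relative to its df degrees of freedom.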