561

Analyzing Storage System Workloads

Sikalinda, Paul 01 June 2006 (has links)
Analysis of storage system workloads is important for a number of reasons. The analysis might be performed to understand the usage patterns of existing storage systems. Architects must understand usage patterns when designing a new storage system or improving an existing design, and system administrators must understand them when configuring and tuning a storage system. The analysis might also be performed to determine the relationship between any two given workloads: before a decision is taken to pool storage resources to increase throughput, there is a need to establish whether the workloads involved are correlated. Furthermore, the analysis of storage system workloads can be done to monitor usage and to understand the storage requirements and behavior of system and application software. Another important reason for analyzing storage system workloads is the need to derive correct workload models for storage system evaluation. For an evaluation, whether based on simulations or otherwise, to be reliable, one has to analyze, understand and correctly model the workloads.

In our work we have developed a general tool, called ESSWA (Enterprize Storage System Workload Analyzer), for analyzing storage system workloads, which has a number of advantages over other storage system workload analyzers described in the literature. Given a storage system workload in the form of an I/O trace file containing data for the workload parameters, ESSWA produces statistics of the data, from which one can derive mathematical models in the form of probability distribution functions for the workload parameters. The statistics and mathematical models describe only the particular workload for which they are produced, because storage system workload characteristics are sensitive to the design and implementation of the file system and buffer pool, so the results of any analysis are less broadly applicable.

We experimented with ESSWA by analyzing storage system workloads represented by three sets of I/O traces at our disposal. Our results show, among other things, that I/O request sizes are influenced by the operating system in use; that the start addresses of I/O requests are somewhat influenced by the application; and that the exponential probability density function, which is often used in simulation of storage systems to generate inter-arrival times of I/O requests, is not the best model for that purpose in the workloads we analyzed. We found the Weibull, lognormal and beta probability density functions to be better models.
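ESSWA itself is not reproduced in the abstract, but as a rough sketch of the kind of model fitting it describes, the Python snippet below fits the four candidate distributions named above (exponential, Weibull, lognormal, beta) to a set of inter-arrival times and ranks them by Kolmogorov-Smirnov statistic. The trace here is synthetic and every parameter is an illustrative assumption, not ESSWA's actual method.

```python
# A minimal sketch, not ESSWA: fit candidate distributions to I/O
# inter-arrival times and rank them by goodness of fit. The synthetic
# Weibull trace stands in for a real I/O trace file.
import numpy as np
from scipy import stats

def fit_interarrival_models(times):
    """Fit candidate distributions by maximum likelihood; rank by K-S statistic."""
    candidates = {
        "exponential": stats.expon,
        "weibull": stats.weibull_min,
        "lognormal": stats.lognorm,
        "beta": stats.beta,
    }
    results = []
    for name, dist in candidates.items():
        params = dist.fit(times)                         # MLE fit (shapes, loc, scale)
        ks_stat, _ = stats.kstest(times, dist.cdf, args=params)
        results.append((ks_stat, name))
    return sorted(results)                               # smallest K-S first

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    times = rng.weibull(0.7, size=10_000) * 5.0          # synthetic stand-in trace
    for ks, name in fit_interarrival_models(times):
        print(f"{name:12s} K-S statistic = {ks:.4f}")
```

Run on such a bursty trace, the exponential fit typically ranks last, mirroring the abstract's finding that heavier-tailed models describe real inter-arrival times better.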
562

A Hardware Testbed for Measuring IEEE 802.11g DCF Performance

Symington, Andrew 01 April 2009 (has links)
The Distributed Coordination Function (DCF) is the oldest and most widely used IEEE 802.11 contention-based channel access control protocol. DCF adds a significant amount of overhead in the form of preambles, frame headers, randomised binary exponential back-off and inter-frame spaces. Having accurate and verified performance models for DCF is thus integral to understanding the performance of IEEE 802.11 as a whole. In this document DCF performance is measured subject to two different workload models using an IEEE 802.11g test bed. Bianchi proposed the first accurate analytic model for the performance of DCF. The model calculates normalised aggregate throughput as a function of the number of stations contending for channel access, and makes a number of assumptions about the system, including saturation conditions (all stations have a fixed-length packet to send at all times), full connectivity between stations, constant collision probability and perfect channel conditions. Many authors have extended Bianchi's machine model to correct certain inconsistencies with the standard, while very few have considered alternative workload models. Owing to the complexities associated with prototyping, most models are verified against simulations rather than experimentally using a test bed. In addition to a saturation model we considered a more realistic workload model representing wireless Internet traffic. Producing a stochastic model for such a workload is challenging, as usage patterns change significantly between users and over time. We implemented and compared two Markov Arrival Processes (MAPs) for packet arrivals at each client: a Discrete-time Batch Markovian Arrival Process (D-BMAP) and a modified Hierarchical Markov Modulated Poisson Process (H-MMPP), both with parameters drawn from the same wireless trace data. We found that, while the latter model exhibits better long-range dependency at the network level, the former represented the traces more accurately at the client level, which made it more appropriate for the test bed experiments. A nine-station IEEE 802.11 test bed was constructed to measure the real-world performance of the DCF protocol experimentally. The stations used IEEE 802.11g cards based on the Atheros AR5212 chipset and ran a custom Linux distribution. The test bed was moved to a remote location with no measured risk of interference from neighbouring radio transmitters in the same band. The DCF machine model was fixed and normalised aggregate throughput was measured for one through eight contending stations, subject to (i) saturation with a fixed packet length of 1000 bytes, and (ii) the D-BMAP workload model for wireless Internet traffic. Control messages were forwarded on a separate wired backbone network so that they did not interfere with the experiments. Analytic solver software was written to calculate numerical solutions for three popular analytic models for DCF, and the solutions were compared to the saturation test bed experiments. Although the normalised aggregate throughput trends were the same, measured aggregate DCF performance diverged from all three analytic models' predictions as the number of contending stations increased: with every station added to the network, measured normalised aggregate throughput fell further below the analytic predictions. We conclude that some property of the test bed was not captured by the simulation software used to verify the analytic models.
The D-BMAP experiments yielded a significantly lower normalised aggregate throughput than the saturation experiments, a clear result of channel underutilisation. Although this is a simple result, it highlights the influence of the traffic model on network performance. Normalised aggregate throughput appeared to scale more linearly when compared to the RTS/CTS access mechanism, but no firm conclusion could be drawn at 95% confidence. We conclude further that, although normalised aggregate throughput is appropriate for describing overall channel utilisation in the steady state, jitter, response time and error rate are more important performance metrics in the case of bursty traffic.
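Bianchi's saturation model, referenced above, reduces to a fixed point between the per-slot transmission probability tau and the conditional collision probability p. The sketch below is a minimal illustration of that calculation, not the thesis's solver: the contention-window parameters follow common 802.11 OFDM defaults (W = 16, six backoff stages), and the slot/success/collision durations are placeholder values rather than the thesis's measured 802.11g timings.

```python
# A minimal sketch of Bianchi's saturation model for DCF (not the thesis's
# analytic solver software): iterate the coupled equations for tau and p,
# then compute normalised aggregate throughput. Timing constants are
# illustrative placeholders, not calibrated 802.11g values.

def bianchi_tau(p, W=16, m=6):
    """Per-slot transmission probability for conditional collision probability p."""
    return (2 * (1 - 2 * p)) / ((1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m))

def solve_fixed_point(n, tol=1e-12, max_iter=10_000):
    """Solve tau = f(p), p = 1 - (1 - tau)^(n-1) for n saturated stations."""
    p = 0.1
    for _ in range(max_iter):
        tau = bianchi_tau(p)
        p_new = 1 - (1 - tau) ** (n - 1)
        if abs(p_new - p) < tol:
            break
        p = 0.5 * (p + p_new)          # damped update for stable convergence
    return tau, p

def saturation_throughput(n, t_payload=800.0, t_slot=9.0,
                          t_success=1100.0, t_collision=1000.0):
    """Normalised throughput: fraction of channel time carrying payload bits."""
    tau, _ = solve_fixed_point(n)
    p_tr = 1 - (1 - tau) ** n                       # P(slot holds a transmission)
    p_s = n * tau * (1 - tau) ** (n - 1) / p_tr     # P(success | transmission)
    t_avg = ((1 - p_tr) * t_slot + p_tr * p_s * t_success
             + p_tr * (1 - p_s) * t_collision)      # mean virtual slot duration
    return p_tr * p_s * t_payload / t_avg

if __name__ == "__main__":
    for n in range(1, 9):                           # one through eight stations
        print(f"{n} stations: S = {saturation_throughput(n):.3f}")
```

With parameters like these the model reproduces the qualitative trend measured on the test bed: throughput declines slowly as contention grows, which is exactly where the measured divergence from the analytic predictions accumulates.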
563

On the Performance Analysis of Large Scale, Dynamic, Distributed and Parallel Systems.

Ardelius, John January 2013 (has links)
Evaluating the performance of large distributed applications is an important and non-trivial task. With the onset of Internet-wide applications there is an increasing need to quantify the reliability, dependability and performance of these systems, both as a guide in system design and as a means to understand the fundamental properties of large-scale distributed systems. Previous research has mainly focused either on formalised models, where system properties can be deduced and verified using rigorous mathematics, or on measurements and experiments on deployed applications. Our aim in this thesis is to study models at an abstraction level lying between these two ends of the spectrum. We adopt a model of distributed systems inspired by methods used in the study of large-scale systems of particles in physics, and model the application nodes as a set of interacting particles, each with an internal state, whose actions are specified by the application program. We apply our modeling and performance evaluation methodology to four different distributed and parallel systems.

The first system is the distributed hash table (DHT) Chord running in a dynamic environment. We study the system under two scenarios. First we study how performance (in terms of lookup latency) is affected in a network with finite communication latency. We show that an average delay, in conjunction with other parameters describing changes in the network (such as timescales for network repair and join and leave processes), induces fundamentally different system performance. We also verify our analytical predictions via simulations. In the second scenario we introduce network address translators (NATs) to the network model. This makes the overlay topology non-transitive, and we explore the implications of this fact for various performance metrics such as lookup latency, consistency and load balance. The latter analysis is mainly simulation-based. Even though these two studies focus on a specific DHT, many of our results can easily be translated to other similar ring-based DHTs with long-range links, and the same methodology can be applied even to DHTs based on other geometries.

The second type of system studied is an unstructured gossip protocol running a distributed version of the Bellman-Ford algorithm. The algorithm, called GAP, generates a spanning tree over the participating nodes, and the question we set out to study is how reliable this structure is (in terms of generating accurate aggregate values at the root) in the presence of node churn. All our analytical results are also verified using simulations.

The third system studied is a content distribution network (CDN) of interconnected caches in an aggregation access network. In this model, content, which sits at the leaves of the cache hierarchy tree, is requested by end users. Requests can then either be served by the first cache level or sent further up the tree. We study the performance of the whole system under two cache eviction policies, namely LRU and LFU, and compare our analytical results with traces from related caching systems.

The last system is a work stealing heuristic for task distribution on the TileraPro64 chip. This system has access to a shared memory and is therefore classified as a parallel system. We create a model for the dynamic generation of tasks as well as for how they are executed and distributed among the participating nodes. We study how the heuristic scales when the number of nodes exceeds the number of processors on the chip (64), and how different work stealing policies compare with each other. The work on this model is mainly simulation-based.
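As a rough illustration of the cache eviction comparison described above (not the thesis's analytical model), the sketch below simulates LRU and LFU hit ratios on a synthetic Zipf-like request stream. The catalogue size, cache capacity and popularity skew are all illustrative assumptions.

```python
# A toy comparison of the two eviction policies analysed in the thesis,
# LRU and LFU, on a synthetic Zipf-like request stream. Parameters are
# illustrative, not drawn from the thesis's trace data.
import random
from collections import Counter, OrderedDict

def lru_hit_ratio(requests, capacity):
    """Least Recently Used: evict the item untouched for longest."""
    cache, hits = OrderedDict(), 0
    for item in requests:
        if item in cache:
            cache.move_to_end(item)
            hits += 1
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)      # evict least recently used
            cache[item] = True
    return hits / len(requests)

def lfu_hit_ratio(requests, capacity):
    """Least Frequently Used: evict the cached item with fewest accesses."""
    cache, freq, hits = set(), Counter(), 0
    for item in requests:
        freq[item] += 1
        if item in cache:
            hits += 1
        else:
            if len(cache) >= capacity:
                cache.remove(min(cache, key=freq.__getitem__))
            cache.add(item)
    return hits / len(requests)

if __name__ == "__main__":
    random.seed(1)
    catalogue, skew = 10_000, 0.8
    weights = [1 / (rank ** skew) for rank in range(1, catalogue + 1)]
    requests = random.choices(range(catalogue), weights=weights, k=100_000)
    print("LRU hit ratio:", round(lru_hit_ratio(requests, 500), 3))
    print("LFU hit ratio:", round(lfu_hit_ratio(requests, 500), 3))
```

On a stationary Zipf stream like this one, frequency-based eviction tends to edge out recency-based eviction, which is the kind of trade-off the thesis quantifies analytically per cache level.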
564

Primus Theatre: Establishing an Alternative Model for Creating Theatre in English Canada

Borody, Claire 11 December 2013 (has links)
This study of Primus Theatre is evidence of many things. First and foremost it is a long-overdue print recognition of Primus Theatre's substantial artistic accomplishments and its important contribution to the development of theatre-making in English-speaking Canada. Given the various factors contributing to the founding of the theatre company and the extremely challenging conditions in which company members functioned over the years, it is truly remarkable that Primus Theatre existed at all. Three central determinations emerge from the examination of Primus Theatre's practice. First, the theatre company truly was a pioneering venture in English Canada: company members established an "as-if-permanent" ensemble that created original performance work drawn from research emerging from their regular training practice, adopting a theatre-making practice generated by the Odin Theatre in Denmark and adapting it to vastly different cultural and fiscal contexts. Second, the origins of the company are inextricably bound to Artistic Director Richard Fowler's personal artistic journey. His strong sense of the creative and communal potential of theatre not only fuelled his own creative journey but also inspired National Theatre School students to launch their own acts of courage. Third, while all aspects of Primus Theatre's creative practice can be linked to that of the Odin Theatre, this relationship is most accurately described as an imprinting, rather than an extension, of Odin Theatre practices. The conscious and unconscious permutation and advancement of the practice, driven by the technical and creative needs and interests of the young Canadian company and deeply affected by substantial financial hardships and creative setbacks, forced Primus to emerge as a unique theatrical entity developing from a particular and identifiable genealogy. This study of the establishment of Primus Theatre also provides evidence that the substantial hardships company members faced did not dissuade them from continuing to explore form and expression. The study documents not only Primus Theatre's substantial body of creative work but also its substantial pedagogical efforts. Subsequently, a new generation of theatre artists has been inspired by and trained in this alternative theatre-making model, and is making its own contributions to the continued redefinition of theatre in English Canada.
565

Sliding bearings in highway bridges and elevated roads

Taylor, M. E. January 1975 (has links)
No description available.
566

An experimental and theoretical study of unsteady gas exchange characteristics for a two-stroke cycle engine

Ashe, M. C. January 1975 (has links)
No description available.
567

An experimental study of the performance of variable reluctance type stepping motors

Rahman, M. F. January 1978 (has links)
No description available.
568

Induction machines with unlaminated rotors

Sambath, H. P. January 1976 (has links)
No description available.
569

The effectiveness of incentive payment systems : an empirical test of individualism as a boundary condition

Clark, David Gregory January 1992 (has links)
Incentive payment systems became more widely used by companies in the 1980s; their acceptance was supported by the predictions of theorists in disciplines such as economics and social psychology. These theoretical traditions have for the most part proceeded separately, but, we argue, there is potential for combining the insights of the different traditions to improve the predictive power of models of incentive pay. To this end, this study demonstrates the potential of an interdisciplinary approach to modelling incentive pay. Closer inspection of current models shows that they are founded on assumptions of rational economic man, including calculative individualism. In practice, however, these assumptions often do not hold. We hypothesize that explicitly specifying individualistic values among employees as a boundary condition for the successful operation of incentive pay systems can improve the models' predictive power. Our hypotheses are tested against a data set of the opinions of 1240 employees in 14 companies across England and Wales. An incentive pay model was found to have greater predictive power among relatively individualistic employees than among those with relatively collectivistic value sets. In addition, the presence of an incentive pay system was associated with greater effort among individualistic employees, whereas there was no significant difference in the effort supplied by collectivistic employees whether or not they were covered by an incentive pay system.
570

A study of methods to overcome manufacturing lead time instability

Johns, Stuart Lionel January 1992 (has links)
No description available.
