About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
31

Performance estimation of wireless networks using traffic generation and monitoring on a mobile device

Tiemeni, Ghislaine Livie Ngangom January 2015
In this study, a traffic-generator software package, MTGawn, was developed to run packet generation and evaluation on a mobile device. The system can simulate voice-over-IP, UDP, and TCP traffic between mobile phones over a wireless network, and can analyse network data much as computer-based monitoring tools such as Iperf and D-ITG do, while remaining self-contained on a mobile device. This entailed porting a stripped-down version of a packet generation and monitoring system, with functionality as found in open-source tools, to a mobile platform. The mobile system can generate and monitor traffic over any network interface on a mobile device and calculate the standard quality-of-service metrics. The tool was compared to a computer-based tool, the Distributed Internet Traffic Generator (D-ITG), in the same environment, and in most cases MTGawn reported results comparable to D-ITG's. The main motivation for this software was to ease feasibility testing and monitoring in the field by using affordable, rechargeable technology such as a mobile device. The system was tested in a testbed and can be used in rural areas where a mobile device is more suitable than a PC or laptop. The main challenge was to port and adapt an open-source packet generator to the Android platform and to provide a suitable touchscreen interface for the tool. / Magister Scientiae - MSc
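The "standard quality-of-service metrics" mentioned above typically mean throughput, one-way delay, and jitter. As a hedged illustration (the abstract does not give MTGawn's formulas; this sketch assumes the usual definitions, with jitter as the mean absolute difference of consecutive one-way delays), such metrics can be computed from a receive log:

```python
# Sketch of standard QoS metrics a traffic monitor derives from a receive
# log. Assumed (hypothetical) input format: one (send_time_s, recv_time_s,
# size_bytes) tuple per received packet, clocks synchronized.

def qos_metrics(packets):
    delays = [rx - tx for tx, rx, _ in packets]
    total_bytes = sum(size for _, _, size in packets)
    duration = max(rx for _, rx, _ in packets) - min(tx for tx, _, _ in packets)
    throughput_bps = 8 * total_bytes / duration if duration > 0 else 0.0
    avg_delay = sum(delays) / len(delays)
    # jitter: mean absolute delta of consecutive one-way delays
    jitter = (sum(abs(b - a) for a, b in zip(delays, delays[1:]))
              / (len(delays) - 1)) if len(delays) > 1 else 0.0
    return {"throughput_bps": throughput_bps,
            "avg_delay_s": avg_delay,
            "jitter_s": jitter}
```

A loss metric would additionally need the sender's packet count, which a self-contained tool like the one described can obtain because it controls both endpoints.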
32

Performance Modeling of Multi-core Systems : Caches and Locks

Pan, Xiaoyue January 2016
Performance is an important aspect of computer systems, since it directly affects user experience. One way to analyze and predict performance is performance modeling. In recent years, multi-core systems have made processors more powerful while keeping power consumption relatively low. However, the complicated design of these systems makes their performance difficult to analyze. This thesis presents performance-modeling techniques for cache performance and synchronization cost on multi-core systems. A cache can be designed in many ways, with configuration parameters including cache size, associativity, and replacement policy. Understanding cache performance under different configurations is useful for exploring the design space. We propose a general modeling framework for estimating the cache miss ratio under different cache configurations, based on the reuse distance distribution. On multi-core systems, each core usually has a private cache, and keeping shared data coherent across private caches has an extra cost. We propose three models to estimate this cost, based on information that can be gathered when running the program on a single core. Locks are widely used as a synchronization primitive in multi-threaded programs on multi-core systems. While they are often necessary for protecting shared data, they also introduce lock contention, which causes performance problems. We present a model that predicts how much contention a lock suffers on multi-core systems, based on information obtainable by profiling a run on a single core. If lock contention turns out to be a performance bottleneck, one way to mitigate it is to switch to another lock implementation. However, investigating whether a different lock implementation would reduce contention is costly, since it requires reimplementation and measurement. We present a model that forecasts lock contention under another lock implementation without replacing the current one.
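The reuse-distance idea behind the cache-miss-ratio framework can be shown with a minimal sketch (this is the textbook fully associative LRU baseline, not the thesis's general framework, which also handles associativity and replacement policy): a reference hits iff the number of distinct addresses touched since the previous access to the same address is smaller than the cache size.

```python
# Fully associative LRU baseline: a reference misses iff its reuse distance
# (distinct addresses since the last access to the same address) >= cache
# size. Cold (first-touch) references get infinite reuse distance.

def reuse_distances(trace):
    last_pos, dists = {}, []
    for i, addr in enumerate(trace):
        if addr in last_pos:
            # distinct addresses touched since the previous access to addr
            dists.append(len(set(trace[last_pos[addr] + 1 : i])))
        else:
            dists.append(float("inf"))  # cold miss
        last_pos[addr] = i
    return dists

def lru_miss_ratio(trace, cache_size):
    dists = reuse_distances(trace)
    misses = sum(1 for d in dists if d >= cache_size)
    return misses / len(dists)
```

The appeal of the approach, as the abstract suggests, is that the reuse distance distribution is a property of the program's trace alone, so one measurement can be reused to evaluate many cache configurations.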
33

Analytical modelling of scheduling schemes under self-similar network traffic : traffic modelling and performance analysis of centralized and distributed scheduling schemes

Liu, Lei January 2010
High-speed transmission over contemporary communication networks has drawn many research efforts. Traffic-scheduling schemes, which play a critical role in managing network transmission, have been pervasively studied and widely implemented in practical communication networks. In a sophisticated communication system, a variety of applications co-exist and require differentiated Quality-of-Service (QoS). Innovative scheduling schemes, and hybrid scheduling disciplines that integrate multiple traditional scheduling mechanisms, have emerged for QoS differentiation. This study aims to develop novel analytical models for scheduling schemes of common interest in communication systems under more realistic network traffic, and to use the models to investigate the design and development of traffic-scheduling schemes. It is widely recognized in the literature that network traffic exhibits a self-similar nature, which has a serious impact on the performance of communication networks and protocols. To study self-similar traffic in depth, real-world traffic datasets are measured and evaluated in this work. The results reveal that self-similar traffic is a ubiquitous phenomenon in high-speed communication networks and highlight the importance of analytical models developed under self-similar traffic. Original analytical models are then developed for centralized scheduling schemes in the presence of self-similar traffic, including Deficit Round Robin, the hybrid PQGPS scheme, which integrates the traditional Priority Queueing (PQ) and Generalized Processor Sharing (GPS) schemes, and the Automatic Repeat reQuest (ARQ) error-control discipline. Most recently, research on innovative Cognitive Radio (CR) techniques in wireless networks has become popular. However, most existing analytical models still employ traditional Poisson traffic to examine the performance of CR systems.
In addition, few studies have been reported on estimating the residual service left by primary users. Instead, most existing studies use an ON/OFF source to model the residual service regardless of the primary traffic. In this thesis, PQ theory is adopted to investigate and model the service left over by self-similar primary traffic and to derive the queue-length distribution of individual secondary users under a distributed random spectrum-access protocol.
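A common way to check measured traces for self-similarity, as this study does with its real-world datasets, is to estimate the Hurst parameter H. The thesis does not specify its estimator; the aggregated-variance method sketched here is one standard choice: for a self-similar series, the variance of the m-aggregated series decays as m^(2H-2), so the slope of log-variance against log-m gives H = 1 + slope/2.

```python
# Aggregated-variance Hurst estimator. H close to 0.5 indicates short-range
# dependence (e.g. Poisson-like traffic); H in (0.5, 1) indicates the
# long-range dependence characteristic of self-similar network traffic.
import math

def hurst_aggregated_variance(x, block_sizes=(1, 2, 4, 8, 16)):
    def var(v):
        mu = sum(v) / len(v)
        return sum((e - mu) ** 2 for e in v) / len(v)

    logs = []
    for m in block_sizes:
        # non-overlapping blocks of size m, each replaced by its mean
        blocks = [sum(x[i:i + m]) / m for i in range(0, len(x) - m + 1, m)]
        logs.append((math.log(m), math.log(var(blocks))))
    # least-squares slope of log-variance vs log-m
    n = len(logs)
    mx = sum(a for a, _ in logs) / n
    my = sum(b for _, b in logs) / n
    slope = (sum((a - mx) * (b - my) for a, b in logs)
             / sum((a - mx) ** 2 for a, _ in logs))
    return 1 + slope / 2
```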
34

Aspects of Design and Analysis of Cognitive Radios and Networks

Hanif, Muhammad Fainan January 2010
Recent survey campaigns have shown tremendous underutilization of the bandwidth allocated to various wireless services. Motivated by this and by the ever-increasing demand for wireless applications, the concept of cognitive radio (CR) systems has raised hope of ending the so-called spectrum scarcity. This thesis presents various facets of the design and analysis of CR systems in a unified way. We begin with an information-theoretic study of cognitive systems working in the so-called low-interference regime of the overlay mode. We show that as long as the coverage area of a CR is smaller than that of a primary user (PU) device, the probability of the cognitive terminal inflicting only small interference at the PU is overwhelmingly high. We also analyze, via a simple and accurate approximation, the effect of a key parameter governing the amount of power allocated to relaying the PU message in the overlay mode in realistic environments. We then explore statistical modeling of the cumulative interference due to multiple interfering CRs. We show that although a closed-form expression can be obtained for the interference due to a single CR, the problem is particularly difficult for the total CR interference in lognormally faded environments. In particular, we demonstrate that fitting a two- or three-parameter lognormal is not feasible for all scenarios. We also explore the second-order characteristics of the cumulative interference by evaluating its level crossing rate (LCR) and average exceedance duration (AED) under Rayleigh and Rician channel conditions. We show that the LCRs in these two cases can be evaluated by modeling the interference process with gamma and noncentral χ2 processes, respectively.
By exploiting radio environment map (REM) information, we present two CR scheduling schemes and compare their performance with the naive primary exclusion zone (PEZ) technique. The results demonstrate the value of an intelligent allocation method in reaping the benefits of the rich information available in REM-based methods. We then turn to multiple-input multiple-output (MIMO) CR systems operating in the underlay mode. Adopting an antenna-selection philosophy, we formulate and solve a convex optimization problem and show, via analysis and simulations, that antenna selection can be a viable option for CRs operating in relatively sparse PU environments. Finally, we study the impact of imperfect channel state information (CSI) on the downlink of an underlay multiple-antenna CR network designed to achieve signal-to-interference-plus-noise ratio (SINR) fairness among the CR terminals. Employing a newly developed convex iteration technique, we solve the relevant optimization problem exactly, without performing any relaxation on the variables involved.
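The difficulty of lognormal fitting noted above can be made concrete with the classic Fenton-Wilkinson moment-matching baseline, which approximates a sum of independent lognormals by a single lognormal with the same mean and variance (a hedged sketch in natural-log parameters, not dB; the thesis's analysis is more general and shows where such single-lognormal fits break down):

```python
# Fenton-Wilkinson approximation: match the mean and variance of a sum of
# independent lognormals with a single lognormal. For lognormal(mu, sigma):
#   E[X]   = exp(mu + sigma^2/2)
#   Var[X] = (exp(sigma^2) - 1) * exp(2*mu + sigma^2)
import math

def fenton_wilkinson(params):
    """params: list of (mu, sigma) pairs of the summand lognormals."""
    mean = sum(math.exp(mu + s * s / 2) for mu, s in params)
    var = sum((math.exp(s * s) - 1) * math.exp(2 * mu + s * s)
              for mu, s in params)
    sigma2 = math.log(1 + var / mean ** 2)   # invert the moment formulas
    mu_fit = math.log(mean) - sigma2 / 2
    return mu_fit, math.sqrt(sigma2)
```

By construction the fit preserves the first two moments; the mismatch the thesis highlights appears in the distribution tails, which the moments do not pin down.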
35

Análise da performance dos fundos de investimentos em ações no Brasil / Performance analysis of equity mutual funds in Brazil

Laes, Marco Antonio 02 December 2010
O objetivo desta dissertação é analisar a performance da indústria de fundos de investimentos em ações no Brasil. Alvo de poucos estudos no mercado nacional, a análise do desempenho da gestão de carteiras se faz cada vez mais importante, dado o avanço, ao longo dos últimos anos, dos fundos de investimentos como destino da poupança privada brasileira. As análises tradicionais, em que é testada individualmente a significância do alfa (intercepto) de regressões dos retornos dos fundos utilizando-se geralmente o CAPM ou o modelo de Fama-French (ou alguma variante destes), sofrem de diversos problemas, como a provável não-normalidade dos erros (73,8% em nossa amostra), e a não-consideração da correlação entre os alfas dos diversos fundos invalidando-se inferências tradicionais. O maior problema desta abordagem, porém, é que se ignora o fato de que, dentro de um universo grande de fundos, espera-se que alguns destes apresentem desempenho superior não por uma gestão diferenciada de suas carteiras, mas por mera sorte. A fim de superar esta dificuldade, o presente estudo, utilizando uma amostra de 812 fundos de ações durante o período 2002-2009 (incluindo-se fundos sobreviventes e não-sobreviventes), simulou a distribuição cross-sectional dos alfas (e de suas respectivas estatística-t) destes fundos através de técnicas de bootstrap, buscando-se com este procedimento eliminar o fator sorte nas análises. Os resultados foram de acordo com a literatura internacional, apresentando evidências da existência de pouquíssimos fundos com performance superior de fato, ao passo que um grande número de fundos apresentou um desempenho negativo, não por azar, mas por real gestão inferior. / The purpose of this dissertation is to examine the performance of the equity mutual funds industry in Brazil. 
Object of few studies in the national market, the performance analysis of active management has become increasingly important, given the advance, especially over the last few years, of mutual funds as a destination of Brazilian private savings. Traditional analyses, in which the significance of the alpha (the intercept) from regressions of fund returns is tested individually, generally using the CAPM or the Fama-French model (or some variant of these), suffer from a range of problems, from the non-normality of errors (73.8% in our sample) to the neglect of the correlation between the alphas of different funds, which invalidates traditional inference. The biggest problem with this approach, however, is that it ignores the fact that, in a large universe of funds, some funds are expected to show superior performance not through differentiated management but through mere luck. To address these shortcomings, the present study, using a sample of 812 equity mutual funds over the 2002-2009 period (both surviving and non-surviving funds), simulates the cross-sectional distribution of alphas (and their respective t-statistics) through bootstrap techniques, aiming to eliminate the luck factor from the analysis. The results accord with the international literature, showing evidence that only very few funds present genuinely superior performance, while a large number of funds present genuinely negative performance, not because they were unlucky, but due to inferior management.
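The luck-versus-skill bootstrap described above can be sketched as follows (a simplified one-factor version in plain Python; the dissertation uses CAPM/Fama-French factors and resamples across the full cross-section of 812 funds): impose the null of zero alpha by resampling a fund's regression residuals, re-estimate alpha on each draw, and compare the actual alpha against this luck-only distribution.

```python
# One-fund, one-factor sketch of the residual bootstrap for alpha.
import random

def ols_alpha(returns, factor):
    n = len(returns)
    mx = sum(factor) / n
    my = sum(returns) / n
    beta = (sum((f - mx) * (r - my) for f, r in zip(factor, returns))
            / sum((f - mx) ** 2 for f in factor))
    alpha = my - beta * mx
    resid = [r - alpha - beta * f for f, r in zip(factor, returns)]
    return alpha, beta, resid

def bootstrap_alpha_pvalue(returns, factor, n_boot=1000, seed=1):
    rng = random.Random(seed)
    alpha, beta, resid = ols_alpha(returns, factor)
    count = 0
    for _ in range(n_boot):
        # pseudo-returns generated under the null of true alpha = 0
        fake = [beta * f + rng.choice(resid) for f in factor]
        a_b, _, _ = ols_alpha(fake, factor)
        if a_b >= alpha:
            count += 1
    return count / n_boot  # small p-value: alpha unlikely to be pure luck
```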
36

Hardware design and performance analysis for cryptographic sponge BlaMka. / Projeto de hardware e análise de desempenho para a esponja criptográfica BlaMka.

Rossetti, Jônatas Faria 19 May 2017
To evaluate the performance of a hardware design, it is necessary to select the metrics of interest. Several metrics can be chosen, but in general three are considered basic: area, latency, and power. From these, other metrics of practical interest, such as throughput and energy consumption, can be obtained. These metrics relate to one another through trade-offs that designers need to understand to make the best design decisions. Some works address hardware design optimized for one of these metrics; in others, optimizations target two of them; still others analyze the trade-off between two metrics. However, the literature lacks works that analyze the behavior of three metrics together. In this work, we aim to help bridge this gap by proposing a method that allows analyzing trade-offs among area, power, and throughput. To verify the proposed method, the permutation function of the cryptographic sponge BlaMka was chosen as a case study. No hardware implementation of this algorithm has been found to date, so an additional contribution is to provide its first hardware design. Combinational and sequential circuits were designed and synthesized for ASIC and FPGA. With the synthesis results, a detailed performance analysis was performed for each platform, starting from a one-dimensional analysis, passing through a two-dimensional analysis, and culminating in a three-dimensional analysis. Two techniques were presented for this analysis, namely the projections approach and the planes approach. Although there is room for improvement, the proposed method is an initial step showing that a trade-off between three metrics can indeed be analyzed, and that balanced performance points can be found.
From the two approaches presented, it was possible to derive a criterion to select optimizations when we have restrictions, such as a desired throughput range or a maximum physical size, and when we do not have restrictions, in which case we can choose the optimization with the most balanced performance. / Para avaliar o desempenho de um projeto de hardware, é necessário selecionar as métricas de interesse. Várias métricas podem ser escolhidas, mas em geral três delas são consideradas básicas: área, latência e potência. A partir delas, podem ser obtidas outras métricas de interesse prático, tais como vazão e consumo de energia. Essas métricas relacionam-se entre si, criando trade-offs que os projetistas precisam conhecer para executar as melhores decisões de projeto. Alguns trabalhos abordam o projeto de hardware otimizado para melhorar uma dessas métricas. Em outros trabalhos, as otimizações são feitas para duas delas, mas sem analisar como uma terceira métrica se relaciona com as demais. Outros analisam o trade-off entre duas dessas métricas. Entretanto, a literatura carece de trabalhos que analisem o comportamento de três métricas em conjunto. Neste trabalho, pretendemos contribuir para preencher essa lacuna, propondo um método que permita a análise de trade-offs entre área, potência e vazão. Para verificar o método proposto, foi escolhida a função de permutação da esponja criptográfica BlaMka como estudo de caso. Até o momento, nenhuma implementação em hardware foi encontrada para esse algoritmo. Dessa forma, uma contribuição adicional é apresentar seu primeiro projeto de hardware. Circuitos combinacionais e sequenciais foram projetados e sintetizados para ASIC e FPGA. Com os resultados de síntese, foi realizada uma análise de desempenho detalhada para cada plataforma, a partir de uma análise unidimensional, passando por uma análise bidimensional e culminando em uma análise tridimensional. 
Duas técnicas foram apresentadas para tal análise tridimensional, chamadas abordagem das projeções e abordagem dos planos. Embora passível de melhorias, o método apresentado é um passo inicial mostrando que, de fato, um trade-off entre três métricas pode ser analisado, e que também é possível encontrar pontos de desempenho balanceado. A partir das duas abordagens, foi possível derivar um critério para selecionar otimizações quando há restrições, como uma faixa de vazão desejada ou um tamanho físico máximo, e quando não há restrições, caso em que é possível escolher a otimização com o desempenho mais balanceado.
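One concrete way to read a "balanced performance point" off a set of synthesized candidates (a hedged illustration; the thesis's projections and planes approaches are geometric, and this min-max rule is only a simple stand-in) is to normalize area, power, and throughput across designs and pick the design whose worst normalized metric is best:

```python
# Min-max selection over three metrics. Area and power: lower is better;
# throughput: higher is better, so it is inverted before normalizing.
# All candidate values below would come from synthesis reports.

def most_balanced(designs):
    """designs: dict name -> (area, power, throughput)."""
    names = list(designs)
    cols = list(zip(*(designs[n] for n in names)))

    def norm(vals, invert=False):
        lo, hi = min(vals), max(vals)
        span = (hi - lo) or 1.0
        return [(hi - v) / span if invert else (v - lo) / span for v in vals]

    scores = list(zip(norm(cols[0]), norm(cols[1]), norm(cols[2], invert=True)))
    # pick the design whose worst (largest) normalized metric is smallest
    return min(names, key=lambda n: max(scores[names.index(n)]))
```

A constrained variant, as the criterion above describes, would first filter `designs` to those inside the throughput range or area budget and then apply the same rule.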
37

Business Intelligence inom sport : Hur används business intelligence/performance analysis inom fotboll och hur tillämpas det inom elitfotbollen i Göteborgsregionen? / Business Intelligence in sport : How is business intelligence/performance analysis used in football and how is it applied to elite football in the Gothenburg region?

Iqbali, Ali January 2019
Performance analysis is a much-discussed topic; it can be used in connection with any sport, and its use has become very popular recently. Football organizations around the world use it in training and matches to develop their players as well as the team as a whole. The purpose of this study is to examine how the elite football teams in the Gothenburg region use performance analysis in combination with training and matches. The question this study aims to answer is: "How is business intelligence/performance analysis used in football, and how is it applied in elite football in the Gothenburg region?" To answer this question, two data-collection methods were applied: a systematic literature review and qualitative interviews. These were used to gather information from the literature on how these two areas have been combined and which benefits and challenges the combination created, and also to gain insight into what people with knowledge of both areas think of such a combination, what they believe it can contribute, and which challenges it may involve. The study clearly shows that there are advantages to combining business intelligence/performance analysis and football. The benefits discussed repeatedly are better development for the players, the ability to measure player performance, and a better basis for decision-making. Some football teams had difficulty introducing new systems, for various reasons, and could not measure player performance. Combining performance analysis and football strengthens the team as a whole, because the team has sufficient information about the players and their conditions. It helps football clubs train and develop players themselves instead of spending millions on other players.
39

A Non-Uniform User Distribution and its Performance Analysis on K-tier Heterogeneous Cellular Networks Using Stochastic Geometry

Li, Chao 07 February 2019
In cellular networks, to support increasing data-rate requirements, many base stations (BSs) with low transmit power and small coverage areas are deployed in addition to classical macro-cell BSs. Low-power nodes such as micro, pico, and femto nodes (indoor and outdoor), which complement the conventional macro network, are placed primarily to increase capacity in hotspots (such as shopping malls and conference centers) and to enhance the coverage of macro cells near the cell boundary. Combining macro and small cells results in heterogeneous networks (HetNets). An accurate node (BS or user equipment (UE)) model is important in the research, design, evaluation, and deployment of 5G HetNets. The distances between transmitter (TX), receiver (RX), and interferers determine the received signal power and the interference power, so the spatial placement of BSs and UEs greatly affects the performance of cellular networks. However, research on the spatial distribution of UEs is limited, although there is ample research on the spatial distribution of BSs. In HetNets, UEs tend to cluster around BSs or social attractors (SAs), so their spatial distribution is non-uniform, and analyzing the impact of this non-uniformity is essential for designing efficient HetNets. This thesis presents a non-uniform user-distribution model built on the existing K-tier BS distribution: a Poisson cluster process with cluster centers located at SAs, where each SA is offset from its BS. Two parameters (cluster radius and base-station offset) combine to cover many possible forms of non-uniformity. A heterogeneity analysis of the proposed non-uniform user-distribution model is also given, and the downlink performance of the model is investigated.
The numerical results show that our theoretical results closely match the simulation results. Moreover, the effect of small-cell BS parameters such as BS density, cell-extension bias factor, and transmit power is included. The uplink coverage probability is also derived analytically, based on some simplifying assumptions made necessary by the added complexity of the uplink analysis, which stems from the UEs' mobile positions and uplink power control. The numerical results show only a small gap between the theoretical and simulation results, suggesting that our simplifying assumptions are acceptable when the system requirements are not very strict. In addition to the effect of BS density, cell-extension bias factor, and transmit power, the effect of the fractional power-control factor in the uplink is also introduced. The comparison between the downlink and the uplink is discussed and summarized at the end. The main goal of this thesis is to develop a comprehensive framework for the non-uniform user distribution in order to produce a tractable analysis of HetNets in the downlink and the uplink using the tools of stochastic geometry.
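The downlink setup can be mimicked with a toy Monte Carlo (single tier only, with illustrative parameter values; the thesis treats K tiers analytically with stochastic geometry): BSs are dropped as a Poisson process over a square region, the typical UE sits at a Gaussian (Thomas-cluster) offset around a social attractor that is itself offset from a BS, and coverage is P[SINR > T] under Rayleigh fading.

```python
# Toy single-tier coverage simulation. All numbers (density, offsets,
# noise, path-loss exponent 4) are illustrative, not from the thesis.
import math, random

def coverage_prob(lam=1e-4, half_side=2000.0, sa_offset=50.0, cluster_r=20.0,
                  tx_p=1.0, noise=1e-12, thr_db=0.0, trials=200, seed=7):
    rng = random.Random(seed)
    thr = 10 ** (thr_db / 10)
    mean_n = lam * (2 * half_side) ** 2   # expected BS count in the region
    covered = 0
    for _ in range(trials):
        # Gaussian approximation to the Poisson count (fine for large mean_n)
        n = max(1, int(rng.gauss(mean_n, math.sqrt(mean_n))))
        bss = [(rng.uniform(-half_side, half_side),
                rng.uniform(-half_side, half_side)) for _ in range(n)]
        # UE: Thomas-cluster offset around an SA that is sa_offset from BS 0
        sa = (bss[0][0] + sa_offset, bss[0][1])
        ue = (sa[0] + rng.gauss(0, cluster_r), sa[1] + rng.gauss(0, cluster_r))
        powers = []
        for bx, by in bss:
            d2 = (bx - ue[0]) ** 2 + (by - ue[1]) ** 2
            fade = rng.expovariate(1.0)              # Rayleigh power fading
            powers.append(tx_p * fade / max(d2, 1.0) ** 2)  # path loss d^-4
        sig = max(powers)                            # strongest-BS association
        if sig / (sum(powers) - sig + noise) > thr:
            covered += 1
    return covered / trials
```

Sweeping `sa_offset` and `cluster_r` reproduces the qualitative effect studied in the thesis: the closer UEs cluster to their serving BSs, the higher the coverage probability.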
40

Efficient Methods for Manufacturing System Analysis and Design

Gershwin, Stanley B., Maggio, Nicola, Matta, Andrea, Tolio, Tullio, Werner, Loren M. 01 1900
The goal of the research described here is to develop tools to assist the rapid analysis and design of manufacturing systems. The methods we describe are based on mathematical models of production systems. We combine earlier work on the decomposition method for factory performance prediction and design with the hedging point method for scheduling. We propose an approach that treats design and operation in a unified manner. The models we study take many of the most important features and phenomena in factories into account, including random failures and repairs of machines, finite buffers, random demand, production lines, assembly and disassembly, imperfect yield, and token-based control policies. / Singapore-MIT Alliance (SMA)
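The hedging-point scheduling idea combined with the decomposition method above has a simple feedback form (a sketch of the standard single-part-type policy; the hedging point Z is the safety surplus that protects against machine failures, and its optimal value is what the cited method computes):

```python
# Single-machine, single-part-type hedging-point policy: produce at full
# rate while the production surplus (cumulative production minus cumulative
# demand) is below the hedging point Z, hold position at Z, idle above it.

def hedging_rate(surplus, z, max_rate, demand_rate, machine_up):
    if not machine_up:
        return 0.0            # failed machine cannot produce
    if surplus < z:
        return max_rate       # build up the safety surplus
    if surplus == z:
        return demand_rate    # hold exactly at the hedging point
    return 0.0                # excess surplus: stop and let demand drain it
```

The design-operation unification the abstract describes amounts to choosing buffer sizes and hedging points jointly, since both trade inventory cost against the risk of starving downstream demand during failures.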
