  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Quality Selection for Dynamic Adaptive Streaming over HTTP with Scalable Video Coding

Andelin, Travis L. 07 December 2011 (has links) (PDF)
Video streaming on the Internet increasingly uses Dynamic Adaptive Streaming over HTTP (DASH), in which the video is converted into various quality levels and divided into two-second segments. A client can then adjust its video quality over time by choosing to download the appropriate quality level for a given segment using standard HTTP. Scalable Video Coding (SVC) is a promising enhancement to the DASH protocol. With SVC, segments are divided into subset bitstream blocks. At playback, the blocks received for a given segment are combined to additively increase its quality. Unlike traditional DASH, which downloads segments serially, this encoding creates a large space of possible ways to download a video; for example, given a variable download rate, when should the client try to maximize the current segment's video quality, and when should it instead play it safe and ensure a minimum level of quality for future segments? In this work, we examine the impact of SVC on the client's quality selection policy, with the goal of maximizing a performance metric quantifying user satisfaction. We use a combination of analysis, dynamic programming, and simulation to show that, in many cases, a client should use a diagonal quality selection policy, balancing both of the aforementioned concerns, and that the slope of the best policy flattens out as the variation in download rate increases.
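The trade-off described in this abstract lends itself to a compact illustration. The sketch below is not the thesis's algorithm; it is a minimal, hypothetical "diagonal" scoring rule that decides whether the next SVC block should deepen the current segment's quality or extend base-layer coverage into the future. The slope parameter and horizon are invented for the example.

```python
# Hypothetical sketch of a "diagonal" SVC quality selection policy:
# blocks are downloaded along a diagonal frontier, trading depth
# (more layers for the current segment) against breadth (base layers
# for future segments). Parameters are illustrative, not from the thesis.

def next_block(layers_have, playhead, horizon, max_layer, slope=0.5):
    """Pick (segment, layer) of the next SVC block to download.

    layers_have[i] = number of layers already downloaded for segment i.
    A steeper slope favors quality of imminent segments; a flatter
    slope favors ensuring base-layer coverage further into the future.
    """
    best, best_score = None, float("-inf")
    for seg in range(playhead, min(playhead + horizon, len(layers_have))):
        layer = layers_have[seg]
        if layer >= max_layer:
            continue  # segment already at full quality
        # Diagonal scoring: prefer low layers and near-term segments.
        score = -(layer + slope * (seg - playhead))
        if score > best_score:
            best, best_score = (seg, layer), score
    return best

# Example: 8 future segments, none downloaded yet, 3 layers each.
have = [0] * 8
for _ in range(12):
    pick = next_block(have, playhead=0, horizon=8, max_layer=3)
    if pick is None:
        break
    seg, _ = pick
    have[seg] += 1
print(have)  # [3, 3, 2, 2, 1, 1, 0, 0] -- a diagonal frontier
```

Flattening the slope shifts downloads toward future base layers, which matches the abstract's observation that flatter policies suit more variable download rates.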
2

From partner selection to collaboration in information sharing multi-agent systems

Park, Jisun 01 June 2010 (has links)
This research advances distributed information sharing by equipping nodes (e.g., software agents) in a distributed network with (1) partner selection algorithms for cooperative environments, and (2) strategies for providing and requesting information in competitive environments. In cooperative environments, information providers are willing to provide requested information, but information consumers must account for uncertainty in the quality of the provided information when selecting appropriate providers. In competitive environments, where a self-interested agent can be an information consumer and provider at the same time, agents need to determine the best ways to request and provide information so that the information acquisition utility is maximized. This research defines a set of metrics for evaluating information acquisition utility and presents a game-theoretic approach, based on stochastic games, for determining the best information sharing strategies. The results show how agents build collaborative relationships with appropriate agents and how the information acquisition utility is affected by those relationships.
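The cooperative case (selecting providers whose quality is uncertain) has the flavor of a multi-armed bandit. The sketch below is offered as an illustration only, not the thesis's stochastic-game formulation: a consumer keeps a Beta posterior over each provider's probability of returning useful information and selects partners by Thompson sampling. The provider qualities are invented.

```python
import random

# Minimal partner-selection sketch under quality uncertainty
# (illustrative; not the thesis's model).

class Provider:
    def __init__(self, true_quality):
        self.true_quality = true_quality  # hidden from the consumer
        self.alpha, self.beta = 1.0, 1.0  # Beta(1,1) prior

def select_partner(providers):
    # Draw a plausible quality for each provider; pick the best draw.
    return max(providers, key=lambda p: random.betavariate(p.alpha, p.beta))

def interact(p):
    useful = random.random() < p.true_quality
    if useful:
        p.alpha += 1  # posterior update on a useful answer
    else:
        p.beta += 1
    return useful

random.seed(0)
providers = [Provider(q) for q in (0.3, 0.6, 0.8)]
payoff = sum(interact(select_partner(providers)) for _ in range(500))
print(payoff, [(p.alpha, p.beta) for p in providers])
```

Over repeated interactions the consumer concentrates its requests on the best provider, which is one simple way "collaborative relationships" with appropriate agents can emerge.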
3

Reducing Long Tail Latencies in Geo-Distributed Systems

Bogdanov, Kirill January 2016 (has links)
Computing services are highly integrated into modern society. Millions of people rely on these services daily for communication, coordination, trading, and accessing information. To meet high demand, many popular services are implemented and deployed as geo-distributed applications on top of third-party virtualized cloud providers. However, such deployments exhibit variable performance characteristics. To deliver high quality of service, these systems strive to adapt to ever-changing conditions by monitoring changes in state and making run-time decisions, such as choosing server peering, replica placement, and quorum selection. In this thesis, we seek to improve the quality of run-time decisions made by geo-distributed systems. We attempt to achieve this through: (1) a better understanding of the underlying deployment conditions, (2) systematic and thorough testing of the decision logic implemented in these systems, and (3) providing a clear view of the network and system state that allows these services to make better-informed decisions. We performed a long-term cross-datacenter latency measurement of the Amazon EC2 cloud provider. We used this data to quantify the variability of network conditions and demonstrated its impact on the performance of systems deployed on top of this cloud provider. Next, we validated the decision logic used in popular storage systems by examining their replica selection algorithms. We introduce GeoPerf, a tool that uses symbolic execution and lightweight modeling to perform systematic testing of replica selection algorithms. We applied GeoPerf to test two popular storage systems and found one bug in each. Then, using traceroute and one-way delay measurements across EC2, we demonstrated a persistent correlation between network paths and network latency. We introduce EdgeVar, a tool that decouples routing-based and congestion-based changes in network latency. By providing this additional information, we improved the quality of latency estimation and increased the stability of network path selection. Finally, we introduce Tectonic, a tool that tracks an application's requests and responses at both the user and kernel levels. In combination with EdgeVar, it provides a complete view of the delays associated with each processing stage of a request and response. Using Tectonic, we analyzed the impact of sharing CPUs in a virtualized environment and inferred the hypervisor's scheduling policies. We argue for the importance of knowing these policies and propose using them in applications' decision-making processes.
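To make the "decision logic" under test concrete: the sketch below is an illustrative example of the kind of replica selection rule a tool like GeoPerf targets, not code from the thesis. It tracks per-replica latency with an exponentially weighted moving average (the smoothing factor is an assumption) and routes each request to the replica with the lowest estimate.

```python
# Illustrative replica-selection logic of the sort GeoPerf tests.

class ReplicaSelector:
    def __init__(self, replicas, alpha=0.2):
        self.alpha = alpha  # EWMA smoothing factor (assumed)
        self.estimate = {r: None for r in replicas}

    def observe(self, replica, latency_ms):
        prev = self.estimate[replica]
        self.estimate[replica] = (
            latency_ms if prev is None
            else (1 - self.alpha) * prev + self.alpha * latency_ms
        )

    def pick(self):
        # Unmeasured replicas are tried first; otherwise lowest EWMA wins.
        return min(self.estimate,
                   key=lambda r: (self.estimate[r] is not None,
                                  self.estimate[r] or 0.0))

sel = ReplicaSelector(["us-east", "eu-west", "ap-south"])
for rtt in (42, 45, 41):
    sel.observe("us-east", rtt)
sel.observe("eu-west", 88)
sel.observe("ap-south", 120)
print(sel.pick())  # -> "us-east"
```

Even logic this simple hides corner cases (cold start, stale estimates, ties) of exactly the kind symbolic execution is suited to exercising systematically.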
4

Analogy-based software project effort estimation : contributions to projects similarity measurement, attribute selection and attribute weighting algorithms for analogy-based effort estimation

Azzeh, Mohammad Y. A. January 2010 (has links)
Software effort estimation by analogy is a viable alternative to other estimation techniques, and in many cases researchers have found that it outperforms other methods in terms of accuracy and practitioners' acceptance. However, the overall performance of analogy-based estimation depends on two major factors: the similarity measure, and attribute selection and weighting. Current similarity measures, such as nearest-neighbour techniques, have been criticized for inadequacies related to attribute relevancy, noise, and uncertainty, in addition to the problem of handling categorical attributes. This research focuses on improving the efficiency and flexibility of analogy-based estimation to overcome these inadequacies. In particular, this thesis proposes two new approaches to model and handle uncertainty in the similarity measure and, most importantly, to reflect the structure of the dataset in similarity measurement using fuzzy modeling based on the Fuzzy C-means algorithm. The first approach, the Fuzzy Grey Relational Analysis method, combines techniques from fuzzy set theory and Grey Relational Analysis to improve local and global similarity measures and to tolerate the imprecision associated with using different data types (continuous and categorical). The second approach uses fuzzy numbers and related concepts to develop a practical yet efficient approach to support analogy-based systems, especially in the early phases of software development. Specifically, we propose a new similarity measure and adaptation technique based on fuzzy numbers. We also propose a new attribute subset selection algorithm and attribute weighting technique based on the central hypothesis of analogy-based estimation, namely that projects similar in attribute values are also similar in effort values, using row-wise Kendall rank correlation between the similarity matrix based on project effort values and the similarity matrix based on project attribute values. A review of related software engineering studies revealed that existing attribute selection techniques (such as brute-force and heuristic algorithms) are restricted to particular performance indicators (such as the Mean Magnitude of Relative Error and the Prediction Performance Indicator) and are computationally far more intensive. The proposed algorithms provide a sound statistical basis and justification for their procedures. The performance of the proposed approaches has been evaluated on real industrial datasets, and results and conclusions from a series of comparative studies with conventional estimation by analogy are presented. These studies also statistically investigate the significance of differences between predictions generated by our approaches and those generated by the most popular techniques: conventional analogy-based estimation, neural networks, and stepwise regression. The results indicate that the two proposed approaches can deliver comparable, if not better, accuracy than the compared techniques, and that Grey Relational Analysis tolerates the uncertainty associated with using different data types. As well as the original contributions of the thesis, a number of directions for further research are presented. Most chapters of this thesis have been disseminated in international journals and highly refereed conference proceedings.
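For readers unfamiliar with the baseline being refined here, a conventional analogy-based estimator is easy to state. The sketch below shows the classic similarity-weighted k-nearest-neighbour scheme that fuzzy and grey-relational methods build on; the feature encoding, distance function, k, and data are illustrative assumptions, not the thesis's proposed method.

```python
import math

# Classic analogy-based effort estimation: effort for a new project is
# the similarity-weighted mean of the k most similar past projects.

def similarity(a, b):
    # Inverse Euclidean distance over normalized numeric attributes.
    d = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return 1.0 / (1.0 + d)

def estimate_effort(new_project, history, k=3):
    scored = sorted(history, key=lambda h: -similarity(new_project, h[0]))[:k]
    weights = [similarity(new_project, feats) for feats, _ in scored]
    return sum(w * eff for w, (_, eff) in zip(weights, scored)) / sum(weights)

# (normalized_size, normalized_complexity) -> effort in person-months
history = [((0.2, 0.3), 12.0), ((0.8, 0.7), 60.0),
           ((0.5, 0.5), 30.0), ((0.3, 0.2), 14.0)]
print(round(estimate_effort((0.4, 0.4), history), 1))  # ~18.9
```

The inadequacies the abstract lists live precisely in these choices: the crisp distance ignores attribute relevance and noise, and offers no natural treatment of categorical values, which is what the fuzzy similarity measures and Kendall-correlation-based attribute weighting aim to fix.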
5

Incorporating the effect of delay variability in path based delay testing

Tayade, Rajeshwary G. 19 October 2009 (has links)
Delay variability poses a formidable challenge in both design and test of nanometer circuits. While process parameter variability increases with technology scaling, the dynamic, vector-dependent variability is also increasing steadily as circuits become more complex. In this research, we develop solutions that incorporate the effect of delay variability in delay testing, focusing on two applications. In the first, delay testing is used to test the timing performance of a circuit with path-based fault models. We show that if dynamic delay variability is not accounted for during the path selection phase, a wrong set of paths can be targeted for test. We have developed efficient techniques to model two dynamic effects: multiple-input switching noise and coupling noise. The basic strategy for incorporating dynamic delay variability is to estimate the maximum vector delay of a path without being too pessimistic. In the second application, the objective is to increase the coverage of reliability defects in the presence of process variations. Such defects cause very small delay changes and hence can easily escape regular tests. We develop a circuit that provides accurate control over the capture edge and thus enables faster-than-at-speed testing. We further develop an efficient path selection algorithm that selects the path detecting the smallest detectable defect at any node in the presence of process variations.
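The second application rests on a simple relationship between the capture edge and the smallest detectable defect, which the sketch below illustrates with invented delay numbers (this is not the thesis's selection algorithm): with capture period T, an extra delay at a node is observable along a path through it only if it exceeds that path's slack (T minus the path delay), so choosing the least-slack testable path through each node minimizes the smallest defect that can be exposed.

```python
# Hedged sketch of slack-based path choice for faster-than-at-speed
# testing. Delays and the capture period are illustrative.

def smallest_detectable_defect(paths_through_node, T):
    """paths_through_node: list of (path_name, nominal_delay)."""
    best = None
    for name, delay in paths_through_node:
        if delay >= T:
            continue  # path already fails at this capture speed
        slack = T - delay
        if best is None or slack < best[1]:
            best = (name, slack)
    return best  # (path, minimum defect delay it can expose)

paths = [("p1", 4.2), ("p2", 4.8), ("p3", 3.1)]
print(smallest_detectable_defect(paths, T=5.0))  # -> ('p2', ~0.2)
```

Shrinking T via a controlled capture edge shrinks the slack of every path, which is why faster-than-at-speed capture catches small reliability defects that escape at-speed tests; process variation then turns each nominal delay into a distribution, which is the complication the thesis's algorithm handles.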
6

Theory and practice of computerized adaptive testing

Sassi, Gilberto Pereira 10 April 2012 (has links)
The aim of this work is to introduce the concepts related to Computerized Adaptive Testing (CAT) for the unidimensional three-parameter logistic model of Item Response Theory. We use a Bayesian approach to estimate the parameter of interest, called the latent trait or ability. We present the main item selection algorithms in CAT and perform simulation studies to compare their performance. The comparisons are made in terms of the mean square error and bias of the trait estimates, the average time for item selection, and the average test length. Furthermore, we show how to install and use the CAT implementation developed in this project, called TAI2U, which was built in Microsoft Excel VBA using an interface with the statistical package R.
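One of the standard item selection rules such a comparison would include is maximum Fisher information. The sketch below implements it for the three-parameter logistic (3PL) model; the item bank and ability estimate are made up, and this is offered as a plausible example of one compared rule rather than TAI2U's actual implementation (which, per the abstract, lives in Excel VBA with an R interface).

```python
import math

# Maximum-information item selection for the 3PL model.

def p_correct(theta, a, b, c):
    # 3PL: guessing floor c, discrimination a, difficulty b.
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b, c):
    p = p_correct(theta, a, b, c)
    q = 1 - p
    return (a ** 2) * (q / p) * ((p - c) / (1 - c)) ** 2

def next_item(theta_hat, item_bank, administered):
    candidates = [i for i in range(len(item_bank)) if i not in administered]
    return max(candidates,
               key=lambda i: fisher_info(theta_hat, *item_bank[i]))

# (a, b, c) triples for a tiny illustrative item bank.
bank = [(1.2, -0.5, 0.2), (0.8, 0.0, 0.25), (1.5, 0.4, 0.2), (1.0, 1.1, 0.2)]
print(next_item(theta_hat=0.3, item_bank=bank, administered={0}))  # -> 2
```

After each response, the Bayesian ability estimate is updated and the rule is applied again, so the test adapts item by item to the examinee.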
7

Study of neuronal activity: optimization of simulation time and stability of models

Sarmis, Merdan 04 December 2013 (has links)
Computational neuroscience studies the nervous system through modeling and simulation, characterizing the laws of biology with mathematical models that integrate the known experimental data. In practice, the more realistic the model, the larger the computational resources it requires; the trade-off between complexity and accuracy is a well-known problem in modeling and simulation. The research conducted in this thesis aims to improve the simulation of mathematical models representing the physical and chemical behavior of synaptic receptors. These models are described by ordinary differential equations (ODEs) and solved with numerical methods. To optimize simulation performance, I implemented various numerical ODE solvers. To facilitate selecting the best solver, a method requiring a minimal amount of information is proposed; it chooses the solver that optimizes the simulation. The method demonstrates that a synaptic receptor model's dynamics influence solver performance more than the model's kinetic scheme itself. In addition, a parameter-optimization phase is performed to characterize pathogenic behavior. However, some parameter values make the model unstable. A stability study determined the stability of the models for parameters reported in the literature, and also derived stability constraints on those parameters. Respecting these constraints guarantees the stability of the models studied during the optimization phase, and therefore the success of the procedure for making a model pathogenic.
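The accuracy-versus-cost question behind solver selection can be shown on a toy receptor model. The sketch below is not the thesis's method: it integrates a hypothetical two-state binding ODE, dB/dt = kon·L·(1−B) − koff·B, with explicit Euler and classical RK4, and reports error against the closed-form solution. The rate constants are illustrative.

```python
import math
import time

KON, KOFF, L = 5.0, 1.0, 0.5  # illustrative binding/unbinding rates

def f(b):
    return KON * L * (1.0 - b) - KOFF * b

def euler(b, h):
    return b + h * f(b)

def rk4(b, h):
    k1 = f(b)
    k2 = f(b + 0.5 * h * k1)
    k3 = f(b + 0.5 * h * k2)
    k4 = f(b + h * k3)
    return b + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(step, h, t_end=2.0):
    b, n = 0.0, int(t_end / h)
    for _ in range(n):
        b = step(b, h)
    return b

# Linear ODE, so the exact solution is available for comparison.
rate = KON * L + KOFF
exact = (KON * L / rate) * (1 - math.exp(-rate * 2.0))
for name, step, h in (("euler", euler, 1e-4), ("rk4", rk4, 1e-2)):
    t0 = time.perf_counter()
    b = integrate(step, h)
    dt = time.perf_counter() - t0
    print(f"{name}: B(2)={b:.6f} err={abs(b - exact):.2e} time={dt*1e3:.1f} ms")
```

RK4 reaches comparable accuracy with a step 100 times larger, and for stiffer dynamics the gap widens further, which is the sense in which a model's dynamics, more than its kinetic scheme, dictate the best solver.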
8

Improving the Robustness of Over-the-Air Synchronization for 5G Networks in a Multipath Environment

Erninger, Anders January 2023 (has links)
Synchronization between base stations is a fundamental part of any operating telecommunication system. With 5G and future generations of mobile networks, data speeds are getting higher, which creates the need for fast and accurate synchronization. In wireless systems, transmitted signals are affected by the environment: both moving and stationary objects can scatter or reflect a transmitted signal, causing the receiver to receive multiple instances of one signal. If a synchronization signal is transmitted by one base station and received in multiple instances by another, it is hard for the receiving base station to know which of the received instances should be used to calculate the synchronization error between the base stations. In this thesis, several algorithms for selecting the synchronization signal pair used to calculate the time alignment error between two base stations are tested. The results are evaluated based on the accuracy with which each algorithm selects a correctly matched signal pair. It is shown that the algorithms proposed in this thesis all perform significantly better than the method currently in use. Further, the advantages and disadvantages of each new algorithm are discussed, and new concepts for future studies are suggested.
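One plausible selection rule for the multipath problem described above, offered purely as an illustration (the thesis's algorithms are not detailed in the abstract): rather than taking the strongest correlation peak, which may be a reflection, take the earliest peak whose power is within a margin of the strongest, treating it as the line-of-sight arrival.

```python
# Illustrative first-arrival selection in a multipath channel.
# Arrival times, powers, and the margin are made-up numbers.

def pick_arrival(peaks, margin_db=6.0):
    """peaks: list of (arrival_time_ns, power_db), unsorted."""
    strongest = max(p for _, p in peaks)
    candidates = [(t, p) for t, p in peaks if p >= strongest - margin_db]
    return min(candidates)  # earliest sufficiently strong peak

# Strongest path (at 310 ns) is a reflection; LOS arrives at 250 ns.
peaks = [(250, -78.0), (310, -74.5), (420, -83.0)]
print(pick_arrival(peaks))  # -> (250, -78.0)
```

Using the 310 ns reflection instead of the 250 ns line-of-sight path would bias the computed time alignment error by 60 ns, which illustrates why the choice of signal instance matters for synchronization accuracy.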
9

ANTENNA SELECTION IN THE DOWNLINK OF PRECODED MULTIUSER MIMO SYSTEMS

DAILYS ARRONDE PEREZ 11 January 2019 (has links)
This thesis focuses on the downlink of multiuser multiple-input multiple-output (MU-MIMO) systems in which the base station (BS) and the user stations (UEs) transmit and receive information symbols through selected subsets of their antennas. System performance is evaluated with linear precoding techniques such as Zero Forcing (ZF) and Minimum Mean Square Error (MMSE). A general model describing the system, together with expressions relating the energy spent in transmission to the energy available for detection at each user, is presented. A transmit antenna selection procedure is proposed that aims to minimize the detection error probability. A suboptimal search algorithm, called ITES (Iterative Search), is also proposed; with only a small fraction of the computational effort, it delivers performance close to that of the optimal exhaustive-search selection. Receive antenna selection is performed using a similar optimization criterion. The general case of joint antenna selection at the transmitter and receiver combines both strategies, reducing complexity at both the BS and the UEs. BER performance results, obtained via simulation and a semi-analytical approach, are presented for different scenarios.
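The abstract states ITES's goal (near-exhaustive performance at a small fraction of the search cost) but not its steps, so the sketch below substitutes a generic greedy subset search as a stand-in: antennas are added one at a time, keeping whichever maximizes the selected channel's smallest singular value, a common proxy for post-ZF detection performance. The channel dimensions and metric are assumptions.

```python
import numpy as np

# Generic greedy transmit-antenna selection (a stand-in, not ITES).

def greedy_select(H, n_select):
    """H: (rx x tx) channel matrix; returns chosen column indices."""
    chosen = []
    remaining = list(range(H.shape[1]))
    for _ in range(n_select):
        # Keep the antenna whose addition maximizes the minimum
        # singular value of the selected sub-channel.
        best = max(remaining,
                   key=lambda j: np.linalg.svd(H[:, chosen + [j]],
                                               compute_uv=False).min())
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(1)
H = (rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))) / np.sqrt(2)
print(greedy_select(H, n_select=4))
```

For this 4-of-8 example, exhaustive search evaluates all 70 subsets while the greedy pass evaluates only 26 candidate sub-channels, which conveys the kind of complexity saving an iterative search buys relative to exhaustive selection.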
