11

Evaluation of Machine Learning Algorithms to Reduce Paging Signalling in a Telecom Network

Larsson, Fredrik, Karlsson, Albert January 2017 (has links)
In a telecommunications network, locating user equipment (paging) is a common procedure. Proposed functionality for 4G and 5G allows eNB-initiated paging via X2 interfaces. In this thesis, machine learning algorithms were evaluated as a means of reducing paging signalling. Additionally, two machine-learning-based paging schemes were proposed and compared, via cost models, to a common paging method. The results show that signalling cost can be reduced by up to 80%.
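To make the cost-model comparison concrete, the sketch below contrasts blanket paging of every cell in a tracking area with a predicted-cell-first scheme; the tracking-area size, predictor hit rate, and per-message costs are illustrative assumptions, not figures from the thesis.

# Illustrative paging cost model (not from the thesis): compare blanket paging,
# which pages every cell in the tracking area at once, with a predicted-first
# scheme that pages the cell a (hypothetical) location predictor ranks most
# likely first and falls back to blanket paging on a miss.

def blanket_cost(num_cells: int) -> int:
    """One paging message per cell in the tracking area."""
    return num_cells

def predicted_first_cost(num_cells: int, hit_probability: float) -> float:
    """Expected messages: 1 on a hit, 1 + (num_cells - 1) on a miss."""
    return hit_probability * 1 + (1 - hit_probability) * (1 + num_cells - 1)

if __name__ == "__main__":
    cells = 20           # assumed tracking-area size
    p_hit = 0.9          # assumed accuracy of the location predictor
    base = blanket_cost(cells)
    smart = predicted_first_cost(cells, p_hit)
    print(f"blanket: {base} messages, predicted-first: {smart:.1f} messages")
    print(f"signalling reduction: {100 * (1 - smart / base):.0f}%")

With an accurate predictor the expected message count drops sharply, which is the kind of reduction the abstract quantifies at up to 80%.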
12

A STUDY OF CLUSTER PAGING METHODS TO BOOST VIRTUAL MEMORY PERFORMANCE

RAMAN, VENKATESH 11 March 2002 (has links)
No description available.
13

Use of techniques and information in adaptive page replacement algorithms

Silva, Ricardo Leandro Piantola da 19 March 2010 (has links)
Virtual memory system performance depends directly on the quality of the memory management policy. Strategies can be developed to improve this performance: one is to create new memory management policies that combine simplicity with good performance; another is to develop techniques and incorporate information that aid existing policies. This work presents a strategy that assists replacement policies in achieving good performance in a memory management system without changing the replacement policy's behavior. To do so, a page prefetching technique was used together with access frequency information obtained through a method from statistical natural language processing. The results show, besides the good performance, that the same strategy can be adopted in any algorithm.
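A minimal sketch of the general idea, assuming a plain LRU cache and simple successor-frequency (bigram) counts as the access-frequency information; the thesis's actual technique and statistical method are not reproduced here.

# Hedged sketch, not the thesis's implementation: an LRU cache whose fault
# handler also prefetches the page most often observed to follow the faulting
# page, using bigram counts (the kind of frequency statistic that statistical
# language processing relies on). All names are illustrative.
from collections import OrderedDict, defaultdict

class LRUWithFrequencyPrefetch:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = OrderedDict()                            # pages in LRU order
        self.follows = defaultdict(lambda: defaultdict(int))  # bigram counts
        self.prev = None
        self.faults = 0

    def _insert(self, page):
        if page in self.cache:
            self.cache.move_to_end(page)
            return
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)      # evict least recently used
        self.cache[page] = None

    def access(self, page):
        if self.prev is not None:
            self.follows[self.prev][page] += 1  # learn "page follows prev"
        if page not in self.cache:
            self.faults += 1
            self._insert(page)
            succ = self.follows[page]
            if succ:                            # prefetch most frequent successor
                self._insert(max(succ, key=succ.get))
        else:
            self.cache.move_to_end(page)
        self.prev = page

if __name__ == "__main__":
    refs = [1, 2, 3, 1, 2, 3, 4, 1, 2, 3, 4, 5]
    mem = LRUWithFrequencyPrefetch(capacity=3)
    for p in refs:
        mem.access(p)
    print("faults:", mem.faults)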
15

Alternative Measures for the Analysis of Online Algorithms

Dorrigiv, Reza 26 February 2010 (has links)
In this thesis we introduce and evaluate several new models for the analysis of online algorithms. In an online problem, the algorithm does not know the entire input from the beginning; the input is revealed in a sequence of steps, and at each step the algorithm must make its decisions based on the past, without any knowledge of the future. Many important real-life problems such as paging and routing are intrinsically online, and thus the design and analysis of online algorithms is one of the main research areas in theoretical computer science. Competitive analysis is the standard measure for the analysis of online algorithms. It has been applied to many online problems in diverse areas ranging from robot navigation to network routing, scheduling, and online graph coloring. While in several instances competitive analysis gives satisfactory results, for certain problems it yields unrealistically pessimistic ratios and/or fails to distinguish between algorithms that have vastly differing performance under any practical characterization. Addressing these shortcomings has been the subject of intense research by many of the best minds in the field.

In this thesis, building upon recent advances of others, we introduce new models for the analysis of online algorithms, namely Bijective Analysis, Average Analysis, Parameterized Analysis, and Relative Interval Analysis, and show that they lead to good results when applied to paging and list update, two well-known online problems. Paging is one of the main examples of the poor behavior of competitive analysis. We show that LRU is the unique optimal online paging algorithm according to Average Analysis on sequences with locality of reference. In practice, input sequences for paging have high locality of reference, and it has long been established empirically that LRU is the best paging algorithm; yet Average Analysis is the first model that gives a strict separation of LRU from all other online paging algorithms, thus solving a long-standing open problem. We prove a similar result for the optimality of MTF for list update on sequences with locality of reference.

A technique for the analysis of online algorithms has to be effective to be useful in day-to-day analysis of algorithms. While Bijective and Average Analysis succeed at providing fine separation, their application can be, at times, cumbersome. We therefore apply a parameterized or adaptive analysis framework to online algorithms and show that this framework is effective, can be applied more easily to a larger family of problems, and leads to finer analysis than the competitive ratio. The conceptual innovation of parameterizing the performance of an algorithm by something other than the input size was first introduced over three decades ago [124, 125]. By now it has been extensively studied and understood in the context of adaptive analysis (for problems in P) and parameterized algorithms (for NP-hard problems), yet to our knowledge this thesis is the first systematic application of this technique to the study of online algorithms. Interestingly, competitive analysis can be recast as a particular form of parameterized analysis in which the performance of OPT is the parameter. In general, for each problem we can choose the parameter or measure that best reflects the difficulty of the input. We show that in many instances the performance of OPT on a sequence is a coarse approximation of the difficulty or complexity of that sequence.

Using a finer, more natural measure, we can separate paging and list update algorithms that were otherwise indistinguishable under the classical model. This creates a performance hierarchy of algorithms which better reflects their intuitive relative strengths. Lastly, we show that, surprisingly, certain randomized algorithms which are superior to MTF in the classical model are not so in the parameterized case, which matches experimental results. We test list update algorithms in the context of a data compression problem known to have locality of reference; our experiments show that MTF outperforms other list update algorithms in practice after the BWT, consistent with the intuition that the BWT increases locality of reference.
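As a small illustration of the paging setting discussed above (not of Bijective or Average Analysis themselves), the sketch below counts page faults for online LRU against the offline optimum OPT, i.e. Belady's farthest-in-future eviction rule, on a short reference string with locality; the sequence and cache size are arbitrary assumptions.

# Count page faults for online LRU versus offline OPT on one reference string.

def lru_faults(refs, k):
    cache, faults = [], 0
    for p in refs:
        if p in cache:
            cache.remove(p)
        else:
            faults += 1
            if len(cache) == k:
                cache.pop(0)                 # evict least recently used
        cache.append(p)                      # most recently used at the back
    return faults

def opt_faults(refs, k):
    cache, faults = set(), 0
    for i, p in enumerate(refs):
        if p in cache:
            continue
        faults += 1
        if len(cache) == k:
            # evict the cached page whose next use lies farthest in the future
            def next_use(q):
                rest = refs[i + 1:]
                return rest.index(q) if q in rest else float("inf")
            cache.remove(max(cache, key=next_use))
        cache.add(p)
    return faults

if __name__ == "__main__":
    refs = [1, 2, 3, 1, 2, 1, 2, 4, 1, 2, 4, 5, 4, 5, 1]  # high locality
    k = 3
    print("LRU faults:", lru_faults(refs, k), "OPT faults:", opt_faults(refs, k))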
17

A Dual-Mode Message Delivery System with Time Constrained Paging Mechanism

Cheng, Hsu-Ching 11 September 2012 (has links)
In this thesis, we propose a dual-mode message delivery system with mechanisms for time-constrained paging and multi-class messages. When a Bluetooth device comes into the service range, the pairing decision depends on the effective pairing time defined by the system. Within the constrained pairing time, the central server can deliver a message to the Bluetooth device directly without re-pairing; otherwise, the Bluetooth device has to be paired with an intermediate node before it can receive a message. In addition, we store in a database the number of times that Bluetooth devices move into the service range, in order to send multi-class messages to these devices. To demonstrate the proposed schemes, we implement a central server on a Linux system and intermediate nodes on the Windows Mobile platform. We also design control packets and their associated message formats. Control messages can be exchanged between the central server and the intermediate nodes via these control packets, and data messages can be transmitted over a heterogeneous network consisting of Bluetooth and Wi-Fi. Finally, we measured the time saved by avoiding the pairing procedure and verified that the system can dynamically adjust the classes of messages according to the number of times that Bluetooth devices enter the service range.
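A minimal sketch of the time-constrained pairing decision, assuming a hypothetical pairing window and an in-memory table of last-pairing times; the names and values are illustrative only, not taken from the thesis.

# Decide between direct delivery and re-pairing based on the pairing window.
import time

PAIRING_WINDOW_SECONDS = 300          # assumed "effective pairing time"
last_paired = {}                      # device address -> last pairing timestamp

def deliver(device_addr: str, message: str) -> None:
    now = time.time()
    paired_at = last_paired.get(device_addr)
    if paired_at is not None and now - paired_at <= PAIRING_WINDOW_SECONDS:
        print(f"direct delivery to {device_addr}: {message}")    # no re-pairing
    else:
        # outside the window: pair via an intermediate node, then deliver
        last_paired[device_addr] = now
        print(f"re-pairing {device_addr}, then delivering: {message}")

deliver("00:11:22:33:44:55", "hello")        # first contact: pairs
deliver("00:11:22:33:44:55", "hello again")  # within window: direct delivery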
18

Exploiting Tracking Area List Concept in LTE Networks

Nawaz, Mohsin January 2013 (has links)
Signaling overhead has always been a concern for network operators. LTE offers many improvements aimed at better network performance and management. This thesis exploits the Tracking Area List (TAL) concept in LTE networks. An algorithm to design TALs from UE traces is developed, and the performance of the resulting TAL design is compared to conventional TA design as well as to rule-of-thumb TAL design, another approach to designing TALs.
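A rough sketch of the signalling trade-off behind TAL design: paging cost grows with the size of the list that must be paged, while tracking-area-update cost falls as the list covers more of the UE's movement. The trace, TAL assignments, and weights below are assumptions for illustration, not the thesis's algorithm or data.

# Total signalling = paging messages + tracking area updates for one UE trace.

def signalling_cost(trace, cell_to_tal, paging_events, w_page=1.0, w_tau=10.0):
    cost = 0.0
    current_tal = None
    for step, cell in enumerate(trace):
        tal = cell_to_tal[cell]
        if tal != current_tal:
            if current_tal is not None:
                cost += w_tau                 # tracking area update
            current_tal = tal
        if step in paging_events:
            cost += w_page * len(tal)         # page every cell in the list
    return cost

trace = ["c1", "c2", "c3", "c2", "c1", "c4"]
small_tals = {"c1": ("c1",), "c2": ("c2",), "c3": ("c3",), "c4": ("c4",)}
one_big_tal = {c: ("c1", "c2", "c3", "c4") for c in trace}
pages = {1, 4}
print("per-cell TALs:", signalling_cost(trace, small_tals, pages))
print("single TAL:   ", signalling_cost(trace, one_big_tal, pages))

Larger lists cut update signalling but make each page more expensive, which is exactly the balance a trace-driven TAL design algorithm has to strike.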
19

Design and Implementation of a Practical FLEX Paging Decoder

McCulley, Scott L. 07 November 1997 (has links)
The Motorola Inc. paging protocol FLEX is discussed, and the design and construction of a FLEX paging protocol decoder are described in detail. The thesis proposes a decoding solution comprising a radio frequency (RF) receiver and a decoder board. The RF receiver is discussed briefly; the decoder design is the main focus of this thesis, as it converts the frequency-modulated (FM) data from the receiver into FLEX data words. The decoder is designed to handle bit sampling, bit clock synchronization, FLEX packet detection, and FLEX data word collection. The FLEX data words are then sent by the decoder to an external computer through a serial link for bit processing and storage. A FLEX transmitter sends randomly generated data so that a bit error rate (BER) calculation can be made at a PC. Each receiver's noise power and noise bandwidth are measured so that the noise spectral density may be calculated, and a complete measurement setup for these noise measurements is shown. The BER at a known power level is recorded, enabling Eb/No curves to be generated so that the results of the decoding algorithm may be compared. This is performed on two different receivers. / Master of Science
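The measurements described above rest on standard relations, sketched below with made-up numbers: BER is the fraction of differing bits, N0 is measured noise power divided by noise bandwidth, and Eb is signal power divided by bit rate.

# BER from a transmitted/received bit comparison, and Eb/N0 from noise figures.
import math

def bit_error_rate(sent_bits, received_bits):
    errors = sum(s != r for s, r in zip(sent_bits, received_bits))
    return errors / len(sent_bits)

def eb_over_n0_db(signal_power_w, bit_rate_bps, noise_power_w, noise_bw_hz):
    eb = signal_power_w / bit_rate_bps        # energy per bit (J)
    n0 = noise_power_w / noise_bw_hz          # noise spectral density (W/Hz)
    return 10 * math.log10(eb / n0)

sent = [1, 0, 1, 1, 0, 0, 1, 0]
recv = [1, 0, 0, 1, 0, 0, 1, 1]
print("BER:", bit_error_rate(sent, recv))
# Example values only; 1600 bps is one of the FLEX signalling rates.
print("Eb/N0 (dB):", round(eb_over_n0_db(1e-12, 1600, 4e-15, 16e3), 2))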
20

Models for Parallel Computation in Multi-Core, Heterogeneous, and Ultra Wide-Word Architectures

Salinger, Alejandro January 2013 (has links)
Multi-core processors have become the dominant processor architecture, with 2, 4, and 8 cores on a chip widely available and an increasing number of cores predicted for the future. In addition, the decreasing cost and increasing programmability of Graphics Processing Units (GPUs) have made them an accessible source of parallel processing power in general-purpose computing. Among the many research challenges that this scenario has raised are the fundamental problems related to the theoretical modeling of computation in these architectures. In this thesis we study several aspects of computation in modern parallel architectures, from modeling of computation in multi-cores and heterogeneous platforms, to multi-core cache management strategies, through the proposal of an architecture that exploits bit-parallelism on thousands of bits.

Observing that in practice multi-cores have a small number of cores, we propose a model for low-degree parallelism for these architectures. We argue that assuming a small number of processors (logarithmic in a problem's input size) simplifies the design of parallel algorithms. We show that in this model a large class of divide-and-conquer and dynamic programming algorithms can be parallelized with simple modifications to sequential programs, while achieving optimal parallel speedups. We further explore low-degree parallelism in computation, providing evidence of fundamental differences in practice and theory between systems with a sublinear and a linear number of processors, and suggesting a sharp theoretical gap between the classes of problems that are efficiently parallelizable in each case.

Efficient strategies to manage shared caches play a crucial role in multi-core performance. We propose a model for paging in multi-core shared caches, which extends classical paging to a setting in which several threads share the cache. We show that in this setting traditional cache management policies perform poorly, and that any effective strategy must partition the cache among threads, with a partition that adapts dynamically to the demands of each thread. Inspired by the shared cache setting, we introduce the minimum cache usage problem, an extension to classical sequential paging in which algorithms must account for the amount of cache they use. This cache-aware model seeks algorithms with good performance in terms of faults and the amount of cache used, and has applications in energy-efficient caching and in shared cache scenarios.

The wide availability of GPUs has added to the parallel power of multi-cores; however, most applications underutilize the available resources. We propose a model for hybrid computation in heterogeneous systems with multi-cores and a GPU, and describe strategies for generic parallelization and efficient scheduling of a large class of divide-and-conquer algorithms.

Lastly, we introduce the Ultra-Wide Word architecture and model, an extension of the word-RAM model that allows for constant-time operations on thousands of bits in parallel. We show that a large class of existing algorithms can be implemented in the Ultra-Wide Word model, achieving speedups comparable to those of multi-threaded computations, while avoiding the more difficult aspects of parallel programming.
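A toy illustration of the ultra-wide-word idea (not the thesis's model or any speedup claim): Python's arbitrary-precision integers stand in for words thousands of bits wide, so set operations over a large universe each become a single bitwise operation on one "word".

# Represent sets of small integers as wide bitmasks; union, intersection,
# and membership are single bitwise operations.

def to_wide_word(elements):
    word = 0
    for e in elements:                 # set bit e for every element
        word |= 1 << e
    return word

a = to_wide_word(range(0, 4000, 3))    # multiples of 3 below 4000
b = to_wide_word(range(0, 4000, 5))    # multiples of 5 below 4000

both = a & b                           # one AND over ~4000 bits: multiples of 15
either = a | b                         # one OR: multiples of 3 or 5
print("multiples of 15 below 4000:", bin(both).count("1"))
print("multiples of 3 or 5 below 4000:", bin(either).count("1"))
print("is 45 in the intersection?", bool((both >> 45) & 1))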
