1

REORDERING PACKET BASED DATA IN REAL-TIME DATA ACQUISITION SYSTEMS

Kilpatrick, Stephen, Rasche, Galen, Cunningham, Chris, Moodie, Myron, Abbott, Ben 10 1900 (has links)
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada / Ubiquitous internet protocol (IP) hardware has reached performance and capability levels that allow its use in data collection and real-time processing applications. Recent development experience with IP-based airborne data acquisition systems has shown that the open, pre-existing IP tools, standards, and capabilities support this form of distribution and sharing of data quite nicely, especially when combined with IP multicast. Unfortunately, the packet based nature of our approach also posed some problems that required special handling to achieve performance requirements. We have developed methods and algorithms for the filtering, selecting, and retiming problems associated with packet-based systems and present our approach in this paper.
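As a rough illustration of the retiming problem described above, the sketch below re-sequences packets by sequence number using a small holding buffer; it is not the paper's algorithm, and the class name and window parameter are hypothetical.

```python
# A minimal sketch of one way to re-sequence out-of-order packets, assuming each
# packet carries a monotonically increasing sequence number. Illustrative only.
import heapq

class ReorderBuffer:
    def __init__(self, start_seq=0, max_held=64):
        self.next_seq = start_seq      # next sequence number we expect to release
        self.max_held = max_held       # bound on buffered packets (loss tolerance)
        self.held = []                 # min-heap of (seq, payload)

    def push(self, seq, payload):
        """Accept one packet; return the list of packets now releasable in order."""
        heapq.heappush(self.held, (seq, payload))
        released = []
        # Release every packet that is next in sequence, and skip ahead if the
        # buffer grows too large (treating the missing packet as lost).
        while self.held and (self.held[0][0] == self.next_seq
                             or len(self.held) > self.max_held):
            seq0, data = heapq.heappop(self.held)
            released.append((seq0, data))
            self.next_seq = seq0 + 1
        return released

buf = ReorderBuffer()
for s in [0, 2, 1, 4, 3]:
    print(buf.push(s, f"pkt{s}"))
```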
2

Unification-based constraints for statistical machine translation

Williams, Philip James January 2014 (has links)
Morphology and syntax have both received attention in statistical machine translation research, but they are usually treated independently and the historical emphasis on translation into English has meant that many morphosyntactic issues remain under-researched. Languages with richer morphologies pose additional problems and conventional approaches tend to perform poorly when either source or target language has rich morphology. In both computational and theoretical linguistics, feature structures together with the associated operation of unification have proven a powerful tool for modelling many morphosyntactic aspects of natural language. In this thesis, we propose a framework that extends a state-of-the-art syntax-based model with a feature structure lexicon and unification-based constraints on the target-side of the synchronous grammar. Whilst our framework is language-independent, we focus on problems in the translation of English to German, a language pair that has a high degree of syntactic reordering and rich target-side morphology. We first apply our approach to modelling agreement and case government phenomena. We use the lexicon to link surface form words with grammatical feature values, such as case, gender, and number, and we use constraints to enforce feature value identity for the words in agreement and government relations. We demonstrate improvements in translation quality of up to 0.5 BLEU over a strong baseline model. We then examine verbal complex production, another aspect of translation that requires the coordination of linguistic features over multiple words, often with long-range discontinuities. We develop a feature structure representation of verbal complex types, using constraint failure as an indicator of translation error and use this to automatically identify and quantify errors that occur in our baseline system. A manual analysis and classification of errors informs an extended version of the model that incorporates information derived from a parse of the source. We identify clause spans and use model features to encourage the generation of complete verbal complex types. We are able to improve accuracy as measured using precision and recall against values extracted from the reference test sets. Our framework allows for the incorporation of rich linguistic information and we present sketches of further applications that could be explored in future work.
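To make the central mechanism concrete, here is a minimal sketch of feature-structure unification over plain nested dictionaries; it omits reentrancy and variables, and the agreement example (CASE/GENDER/NUMBER features) is ours, not taken from the thesis.

```python
# A minimal sketch of feature-structure unification over nested dicts, assuming
# atomic values and no shared substructure; this simplification is ours.
class UnificationFailure(Exception):
    pass

def unify(fs1, fs2):
    """Return the most general feature structure consistent with both inputs."""
    if isinstance(fs1, dict) and isinstance(fs2, dict):
        result = dict(fs1)
        for feat, val in fs2.items():
            result[feat] = unify(result[feat], val) if feat in result else val
        return result
    if fs1 == fs2:          # identical atomic values unify with themselves
        return fs1
    raise UnificationFailure(f"{fs1!r} and {fs2!r} clash")

# Agreement example: a determiner and noun must share compatible feature values.
det  = {"CASE": "nom", "GENDER": "masc"}
noun = {"GENDER": "masc", "NUMBER": "sg"}
print(unify(det, noun))            # {'CASE': 'nom', 'GENDER': 'masc', 'NUMBER': 'sg'}
# unify(det, {"GENDER": "fem"})    # would raise UnificationFailure
```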
3

Congestion control algorithms of TCP in emerging networks

Bhandarkar, Sumitha 02 June 2009 (has links)
In this dissertation we examine some of the challenges faced by the congestion control algorithms of TCP in emerging networks. We focus on three main issues. First, we propose TCP with delayed congestion response (TCP-DCR), for improving performance in the presence of non-congestion events. TCP-DCR delays the congestion response for a short interval of time, allowing local recovery mechanisms to handle the event, if possible. If at the end of the delay, the event persists, it is treated as congestion loss. We evaluate TCP-DCR through analysis and simulations. Results show significant performance improvements in the presence of non-congestion events with marginal impact in their absence. TCP-DCR maintains fairness with standard TCP variants that respond immediately. Second, we propose Layered TCP (LTCP), which modifies a TCP flow to behave as a collection of virtual flows (or layers), to improve efficiency in high-speed networks. The number of layers is determined by dynamic network conditions. Convergence properties and RTT-unfairness are maintained similar to those of TCP. We provide the intuition and the design for the LTCP protocol and evaluation results based on both simulations and a Linux implementation. Results show that LTCP is about an order of magnitude faster than TCP in utilizing high bandwidth links while maintaining promising convergence properties. Third, we study the feasibility of employing congestion avoidance algorithms in TCP. We show that end-host based congestion prediction is more accurate than previously characterized. However, uncertainties in congestion prediction may be unavoidable. To address these uncertainties, we propose an end-host based mechanism called Probabilistic Early Response TCP (PERT). PERT emulates the probabilistic response function of the router-based scheme RED/ECN in the congestion response function of the end-host. We show through extensive simulations that, similar to router-based RED/ECN, PERT provides fair bandwidth sharing with low queuing delays and negligible packet losses, without requiring router support. It exhibits better characteristics than TCP-Vegas, the illustrative end-host scheme. PERT can also be used for emulating other router schemes. We illustrate this through preliminary results for emulating the router-based mechanism REM/ECN. Finally, we show the interactions and benefits of combining the different proposed mechanisms.
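The core idea behind a PERT-style end-host response can be sketched as follows: estimate queuing delay from the smoothed RTT minus the minimum observed RTT and feed it into a RED-like piecewise-linear probability curve. The thresholds, back-off factor and function names below are illustrative assumptions, not the dissertation's parameters.

```python
# A rough sketch of a PERT-style end-host response: map estimated queuing delay
# onto a RED-like probability curve and back off probabilistically.
import random

def queuing_delay(srtt_ms, min_rtt_ms):
    return max(0.0, srtt_ms - min_rtt_ms)

def early_response_prob(qdelay_ms, min_th=5.0, max_th=10.0, max_p=0.05):
    """RED-style probability: 0 below min_th, rising linearly to max_p at max_th."""
    if qdelay_ms < min_th:
        return 0.0
    if qdelay_ms > max_th:
        return 1.0
    return max_p * (qdelay_ms - min_th) / (max_th - min_th)

def maybe_respond(cwnd, srtt_ms, min_rtt_ms, rng=random.random):
    """Reduce cwnd probabilistically, as a RED/ECN router might mark a packet."""
    p = early_response_prob(queuing_delay(srtt_ms, min_rtt_ms))
    return cwnd * 0.65 if rng() < p else cwnd   # modest back-off factor (illustrative)

print(maybe_respond(cwnd=100.0, srtt_ms=48.0, min_rtt_ms=40.0))
```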
4

An approach for code generation in the Sparse Polyhedral Framework

Strout, Michelle Mills, LaMielle, Alan, Carter, Larry, Ferrante, Jeanne, Kreaseck, Barbara, Olschanowsky, Catherine 04 1900 (has links)
Applications that manipulate sparse data structures contain memory reference patterns that are unknown at compile time due to indirect accesses such as A[B[i]]. To exploit parallelism and improve locality in such applications, prior work has developed a number of Run-Time Reordering Transformations (RTRTs). This paper presents the Sparse Polyhedral Framework (SPF) for specifying RTRTs and compositions thereof and algorithms for automatically generating efficient inspector and executor code to implement such transformations. Experimental results indicate that the performance of automatically generated inspectors and executors competes with the performance of hand-written ones when further optimization is done.
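A toy inspector/executor pair illustrates the general idea for a loop whose body is y[i] += A[B[i]] (our example, not one from the paper): the inspector examines the index array B at run time and derives a locality-improving iteration order, and the executor replays the loop in that order.

```python
# A toy inspector/executor. The inspector runs once at run time, inspects the
# index array B, and produces an iteration order that visits A in increasing
# address order; the executor performs the same computation in that order.
def inspector(B):
    # Sort iteration indices by the location of A they touch (data locality).
    return sorted(range(len(B)), key=lambda i: B[i])

def executor(A, B, y, order):
    for i in order:            # same loop body, transformed iteration order
        y[i] += A[B[i]]
    return y

A = [10.0, 20.0, 30.0, 40.0]
B = [3, 0, 3, 1, 2, 0]         # indirect accesses known only at run time
y = [0.0] * len(B)
order = inspector(B)           # e.g. [1, 5, 3, 4, 0, 2]
print(executor(A, B, y, order))
```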
5

Variations on the Theme of Caching

Gaspar, Cristian January 2005 (has links)
This thesis is concerned with caching algorithms. We investigate three variations of the caching problem: web caching in the Torng framework, relative competitiveness and caching with request reordering.

In the first variation we define different cost models involving page sizes and page costs. We also present the Torng cost framework introduced by Torng in [29]. Next we analyze the competitive ratio of online deterministic marking algorithms in the BIT cost model combined with the Torng framework. We show that given some specific restrictions on the set of possible request sequences, any marking algorithm is 2-competitive.

The second variation consists in using the relative competitiveness ratio on an access graph as a complexity measure. We use the concept of access graphs introduced by Borodin [11] to define our own concept of relative competitive ratio. We demonstrate results regarding the relative competitiveness of two cache eviction policies in both the basic and the Torng framework combined with the CLASSICAL cost model.

The third variation is caching with request reordering. Two reordering models are defined. We prove some important results about the value of a move and number of orderings, then demonstrate results about the approximation factor and competitive ratio of offline and online reordering schemes, respectively.
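For readers unfamiliar with marking algorithms, the sketch below shows a classical marking cache in the plain uniform-cost model; it does not model the BIT or Torng cost frameworks analysed in the thesis.

```python
# A minimal marking algorithm in the uniform-cost model: a page is marked on
# every access; on a miss with a full cache an arbitrary unmarked page is
# evicted, and a new phase begins (all marks cleared) once every cached page is
# marked.
def marking_cache(requests, k):
    cache, marked, misses = set(), set(), 0
    for page in requests:
        if page not in cache:
            misses += 1
            if len(cache) >= k:
                if marked >= cache:                  # all cached pages marked: new phase
                    marked = set()
                victim = next(iter(cache - marked))  # evict any unmarked page
                cache.remove(victim)
            cache.add(page)
        marked.add(page)                             # mark on every access
    return misses

print(marking_cache(["a", "b", "c", "a", "d", "b"], k=2))
```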
6

Impact of Out-of-Order Delivery in DiffServ Networks

Jheng, Bo-Wun 14 September 2006 (has links)
Packet reordering is generally considered to have a negative impact on network performance. In this thesis, packet reordering is instead used to help TCP recover faster in RED-enabled packet-switched networks. RED queue management prevents congestion by dropping packets probabilistically, before congestion would actually occur. After a RED router drops a packet, packet reordering is introduced during TCP's recovery process. A new, simple buffer mechanism, called RED with Recovery Queue or R2Q, is proposed to create this type of packet reordering on behalf of TCP, with the objective of accelerating TCP's recovery and thus improving overall network performance. In R2Q, the original RED queue is segmented into two sub-queues. The first sub-queue retains the function of the original RED while the second picks up the packets discarded by the first. Scheduling the second-chance transmission of the packets in the secondary sub-queue is then the key to achieving our objective. In this thesis, we considered two scheduling schemes: priority and weighted round robin. To evaluate the performance of R2Q with these two scheduling schemes, we implemented and evaluated them in the J-Sim network simulation environment. The well-known dumbbell network topology was adopted and we varied different parameters, such as round-trip time, bottleneck bandwidth, buffer size, and WRR weight, in order to understand how R2Q performs under different network configurations. We found that R2Q is more effective in networks with sufficient buffering and a larger product of RTT and bandwidth. With WRR, we may achieve as much as 2% improvement over the original RED. The improvement may be greater in networks of even higher speed.
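A much-simplified sketch of the two-sub-queue idea follows: packets the RED stage would drop are diverted to a recovery queue, and a weighted-round-robin scheduler interleaves the two queues at dequeue time. The RED thresholds, weights and class names are illustrative, not the parameters used in the thesis.

```python
# A simplified model of a RED queue with a secondary "recovery" queue and WRR
# scheduling between the two. Parameters are illustrative.
import random
from collections import deque

class TwoQueue:
    def __init__(self, min_th=5, max_th=15, max_p=0.1, w_main=4, w_rec=1):
        self.main, self.rec = deque(), deque()
        self.min_th, self.max_th, self.max_p = min_th, max_th, max_p
        self.cycle = [True] * w_main + [False] * w_rec   # WRR service pattern
        self.slot = 0

    def red_drop_prob(self):
        q = len(self.main)
        if q < self.min_th:
            return 0.0
        if q > self.max_th:
            return 1.0
        return self.max_p * (q - self.min_th) / (self.max_th - self.min_th)

    def enqueue(self, pkt):
        if random.random() < self.red_drop_prob():
            self.rec.append(pkt)       # second chance instead of a real drop
        else:
            self.main.append(pkt)

    def dequeue(self):
        use_main = self.cycle[self.slot % len(self.cycle)]
        self.slot += 1
        src = self.main if (use_main and self.main) or not self.rec else self.rec
        return src.popleft() if src else None

q = TwoQueue()
for i in range(30):
    q.enqueue(i)
print([q.dequeue() for _ in range(10)])
```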
7

Performance oriented scheduling with power constraints

Hayes, Brian C 01 June 2005 (has links)
Current technology trends continue to increase the power density of modern processors at an exponential rate. The increasing transistor density has significantly impacted cooling and power requirements and, if left unchecked, the power barrier will adversely affect performance gains in the near future. In this work, we investigate the problem of instruction reordering for improving both performance and power requirements. Recently, a new scheduling technique, called Forced Directed Instruction Scheduling, or FDIS, has been proposed in the literature for use in high-level synthesis as well as instruction reordering [15, 16, 6]. This thesis extends the FDIS algorithm by adding several features, such as control instruction handling and register renaming, in order to obtain better performance and power reduction. Experimental results indicate that performance improvements of up to 24.62% and power reductions of up to 23.98% are obtained on a selected set of benchmark programs.
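As background for the scheduling technique named above, the sketch below computes the self-force of the classic force-directed scheduling formulation that FDIS builds on; it ignores predecessor/successor forces and resource types, and does not model the control-instruction handling or register renaming added in this thesis.

```python
# A small sketch of the self-force computation in classic force-directed
# scheduling (Paulin-Knight style), under simplifying assumptions of our own.
def time_frames(asap, alap):
    """asap/alap: dicts op -> earliest/latest time step."""
    return {op: range(asap[op], alap[op] + 1) for op in asap}

def probabilities(frames):
    return {op: {t: 1.0 / len(frame) for t in frame} for op, frame in frames.items()}

def distribution_graph(probs, horizon):
    return [sum(p.get(t, 0.0) for p in probs.values()) for t in range(horizon)]

def self_force(op, t0, frames, probs, dg):
    """Force of fixing `op` at step t0: low values smooth out the distribution."""
    return sum(dg[t] * ((1.0 if t == t0 else 0.0) - probs[op][t])
               for t in frames[op])

asap = {"a": 0, "b": 0, "c": 1}
alap = {"a": 1, "b": 2, "c": 2}
frames = time_frames(asap, alap)
probs = probabilities(frames)
dg = distribution_graph(probs, horizon=3)
# Pick the (operation, time-step) assignment with the smallest self-force.
best = min(((self_force(op, t, frames, probs, dg), op, t)
            for op in frames for t in frames[op]))
print(dg, best)
```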
8

Reordenação de matrizes de dados quantitativos usando árvores PQR / Using PQR trees for quantitative data matrix reordering

Medina, Bruno Figueiredo, 1990- 27 August 2018 (has links)
Advisor: Celmar Guimarães da Silva / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Tecnologia / Abstract: Matrices are structures underlying different types of data visualization, such as heatmaps. Different algorithms automatically permute their rows and columns to provide better visual understanding, aiming to group similar rows and columns and reveal patterns. Earlier work tested and compared some of these algorithms on binary data matrices and showed that the PQR-Sort with Sorted Restrictions algorithm gives good results in terms of runtime and reordering quality on some types of matrices. However, this algorithm had not been extended to quantitative data matrices. As a continuation of that work, this project tests the hypothesis that it is possible to develop variations of the PQR-Sort with Sorted Restrictions algorithm that reorder quantitative data matrices and whose reordering quality and time efficiency surpass algorithms with the same purpose. In this work, the Smoothed Multiple Binarization (SMB) and Multiple Binarization (MB) algorithms were developed. Both build feature vectors (to discover canonical patterns in the data) and use PQR trees and matrix binarization for the reordering. SMB has the potential to provide good reorderings of noisy matrices, since it treats this noise in the data set. The algorithms were tested and compared with Multidimensional Scaling (MDS) and an adapted Sugiyama algorithm (barycentric heuristic), in terms of reordering quality and runtime, on synthetic matrices containing the canonical patterns Simplex, Band, Circumplex and Equi. The results indicate that SMB and MB stood out among the tested algorithms for their ability to reveal the Circumplex pattern, and produced results similar to the other algorithms for the Equi and Band patterns. The results also indicate that SMB and MB are, on average, 3 and 6 times faster than MDS, respectively. Thus, SMB and MB are attractive choices for reordering matrices that exhibit the Circumplex, Equi and Band patterns / Master's degree in Technology and Innovation (Mestre em Tecnologia)
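To illustrate only the binarization step that the MB/SMB algorithms build on, the sketch below thresholds a quantitative matrix at several levels to obtain a stack of binary matrices; the choice of evenly spaced quantile thresholds is our assumption, and the PQR-tree reordering itself is not shown.

```python
# A sketch of the multiple-binarization step: threshold a quantitative matrix at
# several levels, producing binary matrices that a PQR-tree-based reorderer
# could then process. Quantile thresholds are an illustrative choice.
import numpy as np

def multiple_binarization(M, levels=3):
    """Return one binary matrix per threshold, thresholds taken at quantiles."""
    qs = np.linspace(0, 1, levels + 2)[1:-1]          # e.g. 0.25, 0.5, 0.75
    thresholds = np.quantile(M, qs)
    return [(M >= th).astype(int) for th in thresholds]

M = np.array([[0.1, 0.9, 0.4],
              [0.8, 0.2, 0.7],
              [0.3, 0.6, 0.5]])
for binary in multiple_binarization(M):
    print(binary)
```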
9

Algorithms and Library Software for Periodic and Parallel Eigenvalue Reordering and Sylvester-Type Matrix Equations with Condition Estimation

Granat, Robert January 2007 (has links)
This Thesis contains contributions in two different but closely related subfields of Scientific and Parallel Computing which arise in the context of various eigenvalue problems: periodic and parallel eigenvalue reordering and parallel algorithms for Sylvester-type matrix equations with applications in condition estimation. Many real world phenomena behave periodically, e.g., helicopter rotors, revolving satellites and dynamic systems corresponding to natural processes, like the water flow in a system of connected lakes, and can be described in terms of periodic eigenvalue problems. Typically, eigenvalues and invariant subspaces (or, specifically, eigenvectors) of certain periodic matrix products are of interest and have direct physical interpretations. The eigenvalues of a matrix product can be computed without forming the product explicitly via variants of the periodic Schur decomposition.

In the first part of the Thesis, we propose direct methods for eigenvalue reordering in the periodic standard and generalized real Schur forms which extend earlier work on the standard and generalized eigenvalue problems. The core step of the methods consists of solving periodic Sylvester-type equations to high accuracy. Periodic eigenvalue reordering is vital in the computation of periodic eigenspaces corresponding to specified spectra. The proposed direct reordering methods rely on orthogonal transformations and can be generalized to more general periodic matrix products where the factors have varying dimensions and ±1 exponents of arbitrary order.

In the second part, we consider Sylvester-type matrix equations, like the continuous-time Sylvester equation AX − XB = C, where A of size m×m, B of size n×n, and C of size m×n are general matrices with real entries, which have applications in many areas. Examples include eigenvalue problems and condition estimation, and several problems in control system design and analysis. The parallel algorithms presented are based on the well-known Bartels–Stewart method and extend earlier work on triangular Sylvester-type matrix equations, resulting in a novel software library, SCASY. The parallel library provides robust and scalable software for solving 44 sign and transpose variants of eight common Sylvester-type matrix equations. SCASY also includes a parallel condition estimator associated with each matrix equation.

In the last part of the Thesis, we propose parallel variants of the direct eigenvalue reordering method for the standard and generalized real Schur forms. Together with the existing and future parallel implementations of the non-symmetric QR/QZ algorithms and the parallel Sylvester solvers presented in the Thesis, the developed software can be used for parallel computation of invariant and deflating subspaces corresponding to specified spectra and associated reciprocal condition number estimates.
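As a small numerical illustration of the continuous-time Sylvester equation AX − XB = C, the sketch below solves it via the naive Kronecker-product formulation, which is only practical for tiny m and n; Bartels-Stewart-style solvers such as those in SCASY avoid forming the mn × mn system. The matrices are random examples, not data from the thesis.

```python
# Solve AX - XB = C via the vectorized identity
#     (I_n kron A - B^T kron I_m) vec(X) = vec(C),
# where vec() stacks columns. Illustrative only; not scalable to large m, n.
import numpy as np

def solve_sylvester_naive(A, B, C):
    m, n = A.shape[0], B.shape[0]
    K = np.kron(np.eye(n), A) - np.kron(B.T, np.eye(m))
    x = np.linalg.solve(K, C.flatten(order="F"))   # vec(C), column-major
    return x.reshape((m, n), order="F")

rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((2, 2))
X_true = rng.standard_normal((3, 2))
C = A @ X_true - X_true @ B
X = solve_sylvester_naive(A, B, C)
print(np.allclose(X, X_true))      # True when the spectra of A and B are disjoint
```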
