61

Déceler les attaques par détournement BGP / Towards uncovering BGP hijacking attacks

Jacquemart, Quentin 06 October 2015 (has links)
The Internet consists of thousands of Autonomous Systems (ASes) that exchange routing information using the Border Gateway Protocol (BGP). Each AS expects the others to give it correct routing information and therefore trusts them completely. Prefix hijacking exploits this trust in order to introduce falsified routes. The techniques that detect this attack generate a large number of alerts, made up of false positives resulting from common routing operations. In this dissertation, we seek to establish the root cause of these alerts beyond doubt. To this end, on the one hand, we reduce the number of alerts by analyzing these networks in depth, deriving from them a series of constructs that reflect real-world standard routing practices, and by considering the risk associated with these constructs in a hijacking attack. On the other hand, we use auxiliary databases that let us learn the reason behind a routing event, which is, in general, known only to the network owner. Specifically, we examine Multiple Origin AS (MOAS) prefixes and present a classification that allows us to eliminate 80% of the alerts. We present a real-world case in which a MOAS coincided with spam and online scam websites. We study non-disjoint prefixes and present a prototype that eliminates 50% of sub-MOAS alerts. We explore unassigned IP space, look for reachable IP addresses, and locate a large amount of spam and online scam websites. / The Internet is composed of tens of thousands of Autonomous Systems (ASes) that exchange routing information using the Border Gateway Protocol (BGP). Consequently, every AS implicitly trusts every other AS to provide accurate routing information. Prefix hijacking is an attack against the inter-domain routing infrastructure that abuses this mutual trust in order to propagate fallacious routes. Current detection techniques raise a pathologically large number of alerts, mostly composed of false positives resulting from benign routing practices. In this dissertation, we seek the root cause of routing events beyond reasonable doubt. First, we reduce the global number of alerts by analyzing false positive alerts, from which we extract constructs that reflect real-world standard routing practices. We then consider the security threat associated with these constructs in a prefix hijacking scenario. Second, we use a variety of auxiliary datasets that reflect distinct facets of the networks involved in a suspicious routing event in order to closely approximate the ground truth, which is traditionally only known by the network owner. Specifically, we investigate Multiple Origin AS (MOAS) prefixes and introduce a classification that we use to discard up to 80% of false positives. We then show a real-world case where a MOAS coincided with spam and web scam traffic. We look at prefix overlaps, clarify their global use, and present a prototype that discards around 50% of false positive sub-MOAS alerts. Finally, we explore the IP blackspace, study the routing-level characteristics of those networks, find live IP addresses, and uncover a large amount of spam and scam activity.
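As a rough illustration of the MOAS conflicts this work classifies, the sketch below flags prefixes announced by more than one origin AS; the `routes` input format, the prefixes, and the AS numbers are illustrative assumptions, not taken from the thesis.

```python
from collections import defaultdict

def find_moas_prefixes(routes):
    """Group announced prefixes by origin AS and flag Multiple Origin AS (MOAS) conflicts.

    `routes` is an iterable of (prefix, origin_asn) pairs, e.g. taken from a
    parsed BGP routing-table dump (illustrative input format).
    """
    origins = defaultdict(set)
    for prefix, origin_asn in routes:
        origins[prefix].add(origin_asn)
    # A prefix announced by more than one origin AS is a MOAS conflict: a
    # candidate hijacking alert that still needs further classification.
    return {prefix: asns for prefix, asns in origins.items() if len(asns) > 1}

# Example usage with made-up announcements:
routes = [("192.0.2.0/24", 64500), ("192.0.2.0/24", 64501), ("198.51.100.0/24", 64502)]
print(find_moas_prefixes(routes))   # {'192.0.2.0/24': {64500, 64501}}
```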
62

Prefixové omezení řízených gramatických systémů / Prefix Restriction of Regulated Grammar Systems

Konečný, Filip January 2008 (has links)
This thesis studies grammar systems whose components use sequences of productions in which the left-hand sides are formed by strings of nonterminals rather than single nonterminals. It introduces three restrictions on the derivations in these grammar systems. The first restriction requires that, in every derivation step, all rewritten symbols occur within the first l symbols of the first continuous block of nonterminals in the sentential form. The second restriction limits derivations to sentential forms containing no more than m continuous blocks of nonterminals. The third restriction extends the second by additionally requiring that each such block of nonterminals be of length h or less. As its main result, the thesis demonstrates that two of these restrictions decrease the generative power of grammar systems.
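As an illustration of the first restriction (rewriting only within the first l symbols of the first continuous block of nonterminals), the sketch below checks whether a given rewrite position is allowed; the list-of-symbols representation and helper names are assumptions made for the example, not the thesis's formalism.

```python
def first_nonterminal_block(sentential_form, nonterminals):
    """Return (start, end) of the first continuous block of nonterminals, or None."""
    start = None
    for i, symbol in enumerate(sentential_form):
        if symbol in nonterminals:
            if start is None:
                start = i
        elif start is not None:
            return start, i
    return (start, len(sentential_form)) if start is not None else None

def position_allowed(sentential_form, nonterminals, position, l):
    """First restriction: the rewritten symbol must lie within the first l
    symbols of the first continuous block of nonterminals."""
    block = first_nonterminal_block(sentential_form, nonterminals)
    if block is None:
        return False
    start, end = block
    return start <= position < min(start + l, end)

# 'a', 'b' are terminals; 'A', 'B' are nonterminals; sentential form: a A B b A
form = ["a", "A", "B", "b", "A"]
print(position_allowed(form, {"A", "B"}, position=2, l=2))  # True: inside the first block
print(position_allowed(form, {"A", "B"}, position=4, l=2))  # False: second nonterminal block
```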
63

Controllable sentence simplification in Swedish: Automatic simplification of sentences using control prefixes and mined Swedish paraphrases

Monsen, Julius January 2023 (has links)
The ability to read and comprehend text is essential in everyday life. Some people, including individuals with dyslexia and cognitive disabilities, may experience difficulties with this. Thus, it is important to make textual information accessible to diverse target audiences. Automatic Text Simplification (ATS) techniques aim to reduce the linguistic complexity in texts to facilitate readability and comprehension. However, existing ATS systems often lack customization to specific user needs, and simplification data for languages other than English is limited. This thesis addressed ATS in a Swedish context, building upon novel methods that provide more control over the simplification generation process, enabling user customization. A dataset of Swedish paraphrases was mined from a large amount of text data. ATS models were then trained on this dataset utilizing prefix-tuning with control prefixes. Two sets of text attributes and their effects on performance were explored for controlling the generation. The first had been used in previous research, and the second was extracted in a data-driven way from existing text complexity measures. The trained ATS models for Swedish and additional models for English were evaluated and compared using SARI and BLEU metrics. The results for the English models were consistent with results from previous research using controllable generation mechanisms, although slightly lower. The Swedish models provided significant improvements over the baseline, in the form of a fine-tuned BART model, and compared to previous Swedish ATS results. These results highlight the efficiency of using paraphrase data paired with controllable generation mechanisms for simplification. Furthermore, the different sets of attributes provided very similar results, pointing to the fact that both these sets of attributes manage to capture aspects of simplification. The process of mining paraphrases, selecting control attributes and other methodological implications are discussed, leading to suggestions for future research.
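The thesis conditions generation with learned control prefixes (prefix-tuning); as a rough, simplified stand-in, the sketch below shows the related control-token idea of prepending discretised attribute values to the source sentence before it is fed to a seq2seq model. The token format and attribute names are illustrative assumptions, not the thesis's setup.

```python
def add_control_prefix(source_sentence, attributes):
    """Prepend discrete control tokens encoding target attribute ratios
    (e.g. compression ratio, lexical complexity) to the source sentence.
    The <ATTR_value> token format is an illustrative convention only."""
    tokens = [f"<{name.upper()}_{value:.2f}>" for name, value in attributes.items()]
    return " ".join(tokens + [source_sentence])

src = "Meningen innehåller flera komplicerade bisatser."
print(add_control_prefix(src, {"length_ratio": 0.8, "word_rank": 0.6}))
# <LENGTH_RATIO_0.80> <WORD_RANK_0.60> Meningen innehåller flera komplicerade bisatser.
```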
64

Query Processing on Prefix Trees Live

Kissinger, Thomas, Schlegel, Benjamin, Habich, Dirk, Lehner, Wolfgang 17 August 2022 (has links)
Modern database systems have to process huge amounts of data and at the same time provide results with low latency. To achieve this, data is nowadays typically held completely in main memory to benefit from its high bandwidth and low access latency, which could never be reached with disks. Current in-memory databases are usually column-stores that exchange columns or vectors between operators and suffer from a high tuple reconstruction overhead. In this demonstration proposal, we present DexterDB, which implements our novel prefix-tree-based processing model that makes indexes the first-class citizens of the database system. The core idea is that each operator takes a set of indexes as input and builds a new index as output that is indexed on the attribute requested by the successive operator. With that, we are able to build composed operators, like the multi-way-select-join-group. Such operators speed up the processing of complex OLAP queries, so that DexterDB outperforms state-of-the-art in-memory databases. Our demonstration focuses on the different optimization options for such query plans. Hence, we built an interactive GUI that connects to a DexterDB instance and allows the manipulation of query optimization parameters. The generated query plans and important execution statistics are visualized to help visitors understand our processing model.
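A toy sketch of the processing model described above, with plain dictionaries standing in for DexterDB's prefix trees: each operator consumes an index and emits a new index keyed on the attribute the next operator needs. Field names and data are made up for illustration.

```python
from collections import defaultdict

def build_index(tuples, key):
    """Build an index (a plain dict stands in for the actual prefix tree) keyed on `key`."""
    index = defaultdict(list)
    for t in tuples:
        index[t[key]].append(t)
    return index

def select_and_reindex(index, predicate, next_key):
    """An operator in the described model: consume an index, apply a selection,
    and emit a new index keyed on the attribute the next operator asks for."""
    out = defaultdict(list)
    for bucket in index.values():
        for t in bucket:
            if predicate(t):
                out[t[next_key]].append(t)
    return out

orders = [{"customer": 1, "region": "EU", "amount": 40},
          {"customer": 2, "region": "US", "amount": 90},
          {"customer": 1, "region": "EU", "amount": 70}]
by_customer = build_index(orders, "customer")
by_region = select_and_reindex(by_customer, lambda t: t["amount"] > 50, "region")
print(dict(by_region))   # tuples with amount > 50, re-indexed on 'region'
```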
65

Parallel Viterbi Search For Continuous Speech Recognition On A Multi-Core Architecture

Parihar, Naveen 11 December 2009 (has links)
State-of-the-art speech-recognition systems can successfully perform simple tasks in real time on most computers, when the tasks are performed in controlled and noise-free environments. However, current algorithms and processors are not yet powerful enough for real-time large-vocabulary conversational speech recognition in noisy, real-world environments. Parallel processing can improve the real-time performance of speech recognition systems and increase their applicability, and developing an effective approach to parallelization is especially important given the recent trend toward multi-core processor design. In this dissertation, we introduce methods for parallelizing a single-pass across-word n-gram lexical-tree-based Viterbi recognizer, which is the most popular architecture for Viterbi-based large-vocabulary continuous speech recognition. We parallelize two different open-source implementations of such a recognizer, one developed at Mississippi State University and the other developed at Rheinisch-Westfälische Technische Hochschule University in Germany. We describe three methods for parallelization. The first, called parallel fast likelihood computation, parallelizes likelihood computations by decomposing mixtures among CPU cores, so that each core computes the likelihood of the set of mixtures allocated to it. A second method, lexical-tree division, parallelizes the search-management component of a speech recognizer by dividing the lexical tree among the cores. A third and alternative method for parallelizing the search-management component, called lexical-tree copies decomposition, dynamically distributes the active lexical-tree copies among the cores. All parallelization methods were tested on two and four cores of an Intel Core 2 Quad processor and significantly improved real-time performance. Several challenges in parallelizing a lexical-tree-based Viterbi speech recognizer are also identified and discussed.
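A rough sketch of the first method, parallel fast likelihood computation, for a single feature frame: the Gaussian mixture components are split across workers and the partial results are combined with log-sum-exp. The shapes, the thread pool, and the diagonal-covariance model are illustrative assumptions, not the recognizers' actual code.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def mixture_loglik_chunk(x, means, variances, weights):
    """Log-likelihood contributions of one chunk of diagonal Gaussians for frame x."""
    log_norm = -0.5 * np.sum(np.log(2 * np.pi * variances) + (x - means) ** 2 / variances, axis=1)
    return log_norm + np.log(weights)

def parallel_frame_loglik(x, means, variances, weights, n_workers=4):
    """Split the mixture components across workers ('parallel fast likelihood
    computation'), then combine the partial results with log-sum-exp."""
    chunks = zip(np.array_split(means, n_workers),
                 np.array_split(variances, n_workers),
                 np.array_split(weights, n_workers))
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = pool.map(lambda c: mixture_loglik_chunk(x, *c), chunks)
    logs = np.concatenate(list(parts))
    m = logs.max()
    return m + np.log(np.sum(np.exp(logs - m)))   # log-sum-exp over all mixtures

rng = np.random.default_rng(0)
x = rng.normal(size=13)                     # one feature frame (e.g. 13 MFCCs)
means = rng.normal(size=(32, 13))           # 32 mixture components
variances = np.ones((32, 13))
weights = np.full(32, 1 / 32)
print(parallel_frame_loglik(x, means, variances, weights))
```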
66

Algoritmy pro vyhledání nejdelšího shodného prefixu / Longest Prefix Match Algorithms

Sedlář, František January 2013 (has links)
This master's thesis explains the basics of the longest prefix match (LPM) problem. It analyzes and describes selected LPM algorithms with respect to their speed, memory requirements, and suitability for hardware implementation. On the basis of these findings, it proposes a new algorithm, Generic Hash Tree Bitmap. It is much faster than many other approaches, while its memory requirements are even lower. An implementation of the proposed algorithm has become part of the Netbench library.
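For readers unfamiliar with the LPM problem itself, a baseline binary-trie lookup can be sketched as follows; this is not the thesis's Generic Hash Tree Bitmap algorithm, and the prefixes and next-hop labels are made up.

```python
import ipaddress

class LPMTrie:
    """A simple binary trie for longest prefix match on IPv4 prefixes."""
    def __init__(self):
        self.root = {}          # each node: {'0': child, '1': child, 'value': next_hop}

    def insert(self, prefix, value):
        net = ipaddress.ip_network(prefix)
        bits = format(int(net.network_address), "032b")[: net.prefixlen]
        node = self.root
        for b in bits:
            node = node.setdefault(b, {})
        node["value"] = value

    def lookup(self, address):
        bits = format(int(ipaddress.ip_address(address)), "032b")
        node, best = self.root, None
        for b in bits:
            if "value" in node:
                best = node["value"]      # remember the longest match seen so far
            if b not in node:
                break
            node = node[b]
        else:
            if "value" in node:
                best = node["value"]
        return best

trie = LPMTrie()
trie.insert("10.0.0.0/8", "A")
trie.insert("10.1.0.0/16", "B")
print(trie.lookup("10.1.2.3"))   # B (the /16 wins over the /8)
print(trie.lookup("10.2.3.4"))   # A
```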
67

[en] FAST DECODING PREFIX CODES / [pt] CÓDIGOS DE PREFIXO DE RÁPIDA DECODIFICAÇÃO

LORENZA LEAO OLIVEIRA MORENO 12 November 2003 (has links)
[pt] Even with the evolution of storage and communication devices, the demand for more efficient data compression mechanisms keeps growing. Among compressors based on symbol frequencies, prefix-free codes stand out: they are used by several methods that combine different algorithms, and they also perform well on their own. Much research has made prefix codes more efficient, focusing above all on reducing the memory space required and the time spent during decompression. The present work covers prefix codes and their decompression techniques in order to propose a new coder, the LTL compressor, which uses length-restricted codes to reduce the memory space of the look-up table, an efficient decoding method. Because restricted codes are used, a small decrease in compression ratio is accepted in order to enable faster decoding. The results obtained indicate a compression loss of less than 11 percent for a character-based model, with an average decoding speed five times that of a canonical decoder. Although, for a word-based model, the average speed gain is 3.5, it is observed that, when the number of symbols is very large, the size of the look-up table prevents efficient use of the cache memory. Thus, LTL is suitable as a replacement for any character-based prefix code in applications that require fast decompression. / [en] Even with the evolution of communication and storage devices, the use of complex data structures, like video and hypermedia documents, keeps increasing the demand for efficient data compression mechanisms. Prefix codes are among the best-known compressors, since they are used by several compression methods that combine different algorithms, besides performing well when used on their own. Many approaches have been tried to improve the decoding speed of these codes. One major reason is that files are compressed and updated just a few times, whereas they have to be decompressed each time they are accessed. This work presents prefix codes and their decoding techniques in order to introduce a new coding scheme. In this scheme, length-restricted codes are used to control the space requirements of the look-up table, an efficient and fast prefix-code decoding method. Since restricted codewords are used, a small loss of compression efficiency is admitted. Empirical experiments indicate that this loss in the coded text is smaller than 11 percent if a character-based model is used, and the observed average decoding speed is five times that of canonical codes. For a word-based model, the average decoding speed is 3.5 times that of a canonical decoder, but it decreases when a large number of symbols is used. Hence, this method is very suitable for applications where a character-based model is used and extremely fast decoding is mandatory.
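A minimal sketch of the look-up table decoding idea the abstract builds on: with length-restricted codes whose maximum codeword length is k, a table of 2^k entries resolves exactly one codeword per lookup. The toy codebook and bit strings below are assumptions for illustration, not the LTL compressor itself.

```python
def build_lookup_table(codebook, k):
    """Build a 2^k-entry table: every k-bit window that starts with codeword c
    maps to (symbol, len(c)). Requires all codeword lengths <= k, which is
    exactly what length-restricted codes guarantee."""
    table = [None] * (1 << k)
    for symbol, code in codebook.items():
        pad = k - len(code)
        base = int(code, 2) << pad
        for suffix in range(1 << pad):          # every completion of the window
            table[base | suffix] = (symbol, len(code))
    return table

def decode(bits, codebook, k):
    table = build_lookup_table(codebook, k)
    out, pos = [], 0
    padded = bits + "0" * k                     # pad so the last window is full
    while pos < len(bits):
        window = int(padded[pos:pos + k], 2)
        symbol, length = table[window]
        out.append(symbol)
        pos += length                           # advance by exactly one codeword
    return "".join(out)

# A toy prefix code with maximum codeword length 3 (so k = 3):
codebook = {"a": "0", "b": "10", "c": "110", "d": "111"}
encoded = "0" + "10" + "110" + "111" + "0"      # "abcda"
print(decode(encoded, codebook, k=3))           # abcda
```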
68

A Low Complexity Cyclic Prefix Reconstruction Scheme for Single-Carrier Systems with Frequency-Domain Equalization

Hwang, Ruei-Ran 25 August 2010 (has links)
The cyclic prefix (CP) is usually adopted in single-carrier frequency-domain equalization (SC-FDE) systems to avoid inter-block interference (IBI) and inter-symbol interference (ISI) in multipath fading channels. In addition, the use of the CP converts the linear convolution between the transmitted signal and the channel into a circular convolution, which greatly simplifies receiver equalization. However, the use of the CP reduces the bandwidth efficiency. Therefore, the SC-FDE system without CP is investigated in this thesis. A number of schemes have been proposed to improve the performance of systems without CP, where both IBI and ICI are dramatically increased. Unfortunately, most of the existing schemes have extremely high computational complexity and are difficult to realize. In this thesis, a novel low-complexity CP reconstruction (CPR) scheme is proposed for interference cancellation, in which successive interference cancellation (SIC) and QR decomposition (QRD) are adopted. In addition, the system performance is further improved by exploiting the fact that the interference affecting different symbols is not the same. Simulation experiments are conducted to verify the performance of the proposed scheme. It is shown that the proposed scheme can effectively reduce the interference while maintaining a low computational complexity.
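A small numerical sketch of the property the abstract relies on: with a CP at least as long as the channel memory, the linear channel convolution becomes circular over the block, so a one-tap frequency-domain equalizer recovers the symbols. The block length, channel taps, and zero-forcing equalizer are illustrative assumptions, not the proposed CPR scheme.

```python
import numpy as np

def sc_fde_with_cp(symbols, channel, cp_len, noise_std=0.0):
    """Transmit one SC-FDE block with a cyclic prefix and equalize it with a
    one-tap zero-forcing frequency-domain equalizer (illustrative parameters)."""
    n = len(symbols)
    tx = np.concatenate([symbols[-cp_len:], symbols])        # prepend the CP
    rx = np.convolve(tx, channel)[: cp_len + n]               # linear channel convolution
    rx = rx + noise_std * (np.random.randn(cp_len + n) + 1j * np.random.randn(cp_len + n))
    rx = rx[cp_len:]                                           # discard the CP: now circular
    H = np.fft.fft(channel, n)                                 # channel frequency response
    return np.fft.ifft(np.fft.fft(rx) / H)                     # one-tap FDE per frequency bin

symbols = np.exp(1j * np.pi / 4 * (2 * np.random.randint(0, 4, 16) + 1))   # QPSK block
channel = np.array([1.0, 0.5, 0.2])                                        # 3-tap multipath
rec = sc_fde_with_cp(symbols, channel, cp_len=4)
print(np.allclose(rec, symbols, atol=1e-10))    # True in the noiseless case
```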
69

CP-Free Space-Time Block Coded MIMO-OFDM System Design Under IQ-Imbalance in Multipath Channel

Huang, Hsu-Chun 26 August 2010 (has links)
Orthogonal frequency division multiplexing (OFDM) systems with a cyclic prefix (CP) can be used to protect the signal from distortions induced by time-variant multipath channels. However, the presence of the CP greatly decreases the effective data rate; thus, many recent research works have focused on multiple-input multiple-output (MIMO) OFDM systems without CP (CP-free), equipped with space-time block codes (ST-BC). The constraint of the conventional CP-free MIMO-OFDM system (without ST-BC) is that the number of receive antennas has to be greater than the number of transmit antennas. In this thesis, we first consider the ST-BC MIMO-OFDM system and show that this constraint can be removed, so that the only condition is more than one receive antenna, which is the basic requirement for a MIMO system. This is particularly useful and conforms to recent specifications, e.g., 3GPP LTE (Long Term Evolution), where 2×2 or 4×4 antenna configurations are deployed. This thesis also considers the effects of peak-to-average power ratio (PAPR) in the transmitter and in-phase/quadrature-phase (IQ) imbalance in the receiver, and addresses them by using an adaptive Volterra predistorter and a blind adaptive filtering approach for nonlinear parameter estimation and compensation, along with power measurement, respectively. After IQ-imbalance compensation at the receiver, an equalizer under the framework of the generalized sidelobe canceller (GSC) is derived for interference suppression. To further reduce the complexity of the receiver implementation, a partially adaptive (PA) scheme is applied by exploiting the structural information of the signal and interference signature matrices. As demonstrated by computer simulation results, the performance of the proposed CP-free ST-BC MIMO-OFDM receiver is very similar to that of the conventional CP-based ST-BC MIMO-OFDM system under either the predistortion or the compensation scenario.
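The ST-BC building block mentioned above can be illustrated with the classic two-antenna Alamouti code in a flat-fading, single-receive-antenna, noiseless setting; this shows only the encoding and combining idea, not the thesis's CP-free MIMO-OFDM receiver, and the channel gains below are made-up values.

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Alamouti ST-BC over two transmit antennas and two symbol periods:
    rows are time slots, columns are antennas."""
    return np.array([[s1,           s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_combine(r1, r2, h1, h2):
    """Combine the two received samples (single receive antenna, flat channel
    gains h1, h2 per transmit antenna) into estimates of s1 and s2."""
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    norm = abs(h1) ** 2 + abs(h2) ** 2
    return s1_hat / norm, s2_hat / norm

h1, h2 = 0.8 + 0.3j, -0.2 + 0.9j                 # flat-fading channel gains (illustrative)
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)
X = alamouti_encode(s1, s2)
r1 = h1 * X[0, 0] + h2 * X[0, 1]                 # received sample, slot 1
r2 = h1 * X[1, 0] + h2 * X[1, 1]                 # received sample, slot 2
print(alamouti_combine(r1, r2, h1, h2))          # ≈ (s1, s2) in the noiseless case
```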
70

Blind Adaptive Receivers for Precoded SIMO DS-CDMA System

Li, Meng-Yi 08 August 2008 (has links)
The capacity of the direct-sequence code division multiple access (DS-CDMA) system is limited mainly by multiple access interference (MAI), which basically results from the incomplete orthogonality of the spreading codes of different users. In wireless communication environments, the use of DS-CDMA over multipath channels introduces inter-symbol interference (ISI); thus, the system performance may degrade dramatically. To circumvent these problems, many adaptive multiuser detectors have been proposed, such as detectors based on the minimum mean square error (MMSE) criterion subject to certain constraints. Unfortunately, the MMSE receiver requires an extra training sequence, which decreases the spectral efficiency. To increase the spectral efficiency, blind adaptive receivers are adopted. In the conventional approach, the blind adaptive receiver is developed based on the linearly constrained minimum variance (LCMV) criterion, which can be viewed as a constrained version of the minimum output energy (MOE) criterion. Another alternative for designing the blind adaptive receiver is to use the linearly constrained constant modulus (LCCM) criterion. In general, the LCCM receiver achieves better robustness in changing channel environments. With the above-mentioned adaptive linearly constrained multiuser receivers, we are able to reduce the effects of ISI and MAI and achieve the desired system performance. However, over poor communication links, the conventional adaptive multiuser detector might not achieve the desired performance or suppress interference effectively. In this thesis, we consider a new approach, in which a pre-coder similar to those used in Orthogonal Frequency Division Multiplexing (OFDM) systems is introduced in the transmitter of the DS-CDMA system. In the receiver, the characteristics of the pre-coder allow the effect of ISI to be removed effectively, followed by an adaptive multiuser detector that suppresses the MAI. The two most commonly used pre-coders in OFDM systems are the Cyclic Prefix (CP) and Zero Padding (ZP). Thus, pre-coded DS-CDMA systems combined with blind adaptive linearly constrained receivers can be employed to further improve the system performance, at the cost of decreased spectral efficiency.
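The spreading-code orthogonality that the abstract starts from can be seen in a toy synchronous, single-path example; the codes, bit values, and function names below are illustrative only and do not reflect the thesis's precoded system or its blind receivers.

```python
import numpy as np

def spread(bits, code):
    """Multiply each data bit (±1) by the user's spreading-code chips."""
    return np.concatenate([b * code for b in bits])

def despread(chips, code):
    """Correlate with the spreading code over each symbol interval."""
    n = len(code)
    return np.array([np.sign(chips[i:i + n] @ code) for i in range(0, len(chips), n)])

# Two users with orthogonal ±1 codes of length 8 (toy values):
code1 = np.array([1, 1, 1, 1, -1, -1, -1, -1])
code2 = np.array([1, -1, 1, -1, 1, -1, 1, -1])
bits1 = np.array([1, -1, 1])
bits2 = np.array([-1, -1, 1])
received = spread(bits1, code1) + spread(bits2, code2)      # synchronous sum of both users
print(despread(received, code1))   # [ 1. -1.  1.]  user 1 recovered
print(despread(received, code2))   # [-1. -1.  1.]  user 2 recovered
```

With multipath and imperfect code orthogonality, the correlations above no longer isolate each user cleanly, which is exactly the MAI/ISI problem the blind constrained receivers in the thesis are designed to handle.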
