  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Scrambling av databaser : Validering och implementering av scrambling av databas / Database scrambling : Validation and implementation of database scrambling

Öberg, Fredrik January 2019 (has links)
The demands on how personal data is handled have recently become much stricter with new regulations such as the GDPR, which means companies need to review how they store and manage data. Furthermore, there is a whole industry that analyzes and anonymizes databases to create test data for companies to use in testing. How can these companies guarantee that they can hand over their databases for this particular purpose? Easit AB wants a system to be built for scrambling databases so that the structure and data in the database are unrecognizable; the database can then be submitted to Easit for analysis. The main objective is to use existing functionality in the Easit Test Engine (ETE) to scramble customers' databases and data beyond recognition, so that the handover of the database can be done without risk, and also to validate the scrambling methods that the solution contains.
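The abstract above describes scrambling a database so that its contents become unrecognizable before handover. As a rough illustration of the idea (a hypothetical sketch, not the Easit Test Engine implementation), one can shuffle each column's values across rows to break cross-column associations and replace text fields with random strings of the same length:

```python
import random
import string

def scramble_table(rows, key_columns=()):
    """Scramble a list of row dicts: shuffle each column's values across rows
    and replace free-text values with random letters of the same length.
    Illustrative sketch only; not the ETE scrambling implementation."""
    rng = random.Random(42)  # fixed seed so the scramble is repeatable here
    if not rows:
        return rows
    columns = rows[0].keys()
    scrambled = [dict(r) for r in rows]  # leave the input table untouched
    for col in columns:
        values = [r[col] for r in scrambled]
        rng.shuffle(values)  # break the association between columns
        for r, v in zip(scrambled, values):
            if isinstance(v, str) and col not in key_columns:
                # replace characters, preserving length and rough shape
                v = ''.join(rng.choice(string.ascii_letters) for _ in v)
            r[col] = v
    return scrambled

people = [{"name": "Anna", "city": "Umeå"}, {"name": "Bo", "city": "Lund"}]
out = scramble_table(people)
```

The structure (row count, column names, value lengths) survives, which is what makes scrambled data usable as test data, while the original values do not.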
22

Information structure in linguistic theory and in speech production : validation of a cross-linguistic data set

Hellmuth, Sam, Skopeteas, Stavros January 2007 (has links)
The aim of this paper is to validate a dataset collected by means of production experiments which are part of the Questionnaire on Information Structure. The experiments generate a range of information structure contexts that have been observed in the literature to induce specific constructions. This paper compares the speech production results from a subset of these experiments with specific claims about the reflexes of information structure in four different languages. The results allow us to evaluate and in most cases validate the efficacy of our elicitation paradigms, to identify potentially fruitful avenues of future research, and to highlight issues involved in interpreting speech production data of this kind.
23

Video Encryption

Yilmaz, Fatih Levent January 2011 (has links)
Video encryption is among the best methods for blocking unwanted interception and viewing of any transmitted video or information. Several useful techniques are available for encrypting video. However, the human eye is uniquely good at spotting irregularities in video caused by weak decoding or a poor choice of encryption hardware. It is therefore very important to select the right hardware; otherwise our video transmissions may not be secure, or our decoded video may be unwatchable. Every technique has advantages and disadvantages over the others.

The line-cut-and-rotate method is perhaps the best way of obtaining safe, secure and good-quality encrypted video. In this method, every line in the video frame is cut and rotated at a different point, and these cut points are drawn from a random matrix. The advantages of this method are that it supplies a coherent video signal, gives an excellent degree of obscurity, and offers good decode quality and stability. Its disadvantages are that it requires complex timing control and specialized encryption equipment.
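The line-cut-and-rotate idea described above can be sketched digitally: treat a frame as a list of scan lines, derive one cut point per line from a keyed pseudo-random generator (a stand-in for the abstract's "random matrix"), and rotate each line at its cut point. This is a toy model of the concept, not the analog hardware scheme the abstract discusses.

```python
import random

def encrypt_frame(frame, key):
    """Line-cut-and-rotate: rotate each scan line left by a key-derived offset.
    Toy digital sketch of the analog technique described in the abstract."""
    rng = random.Random(key)
    cuts = [rng.randrange(len(line)) for line in frame]  # one cut point per line
    return [line[c:] + line[:c] for line, c in zip(frame, cuts)], cuts

def decrypt_frame(scrambled, cuts):
    # rotate each line right by its cut point to undo the encryption
    return [line[-c:] + line[:-c] for line, c in zip(scrambled, cuts)]

frame = [[1, 2, 3, 4], [5, 6, 7, 8]]  # tiny 2x4 "grayscale frame"
enc, cuts = encrypt_frame(frame, key=1234)
dec = decrypt_frame(enc, cuts)
```

A receiver that regenerates the same cut points from the shared key recovers the frame exactly; a viewer without the key sees each line shifted by an unknown amount.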
24

Scrambling and Complexity in AdS/CFT and Black Holes / AdS/CFT対応とブラックホールにおけるスクランブリングと計算複雑度

Watanabe, Kento 26 March 2018 (has links)
Kyoto University / 0048 / New-system course doctorate / Doctor of Science / Degree no. 20916 (Science Doctorate no. 4368) / 新制||理||1627 (University Library) / Graduate School of Science, Division of Physics and Astrophysics, Kyoto University / Examiners: Prof. Tadashi Takayanagi, Prof. Hikaru Kawai, Prof. Shigeki Sugimoto / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Science / Kyoto University / DFAM
25

PAPR Reduction Schemes Based on Spreading Code Combination and Subcarrier Scrambling for MC-CDMA Systems

Lee, Ming-Kai 23 August 2011 (has links)
To mitigate the high peak-to-average power ratio (PAPR) of multi-carrier systems, in this paper we derive a statistical characterization of the time-domain signal power variance metric based on each user's spreading code combination and subcarrier scrambling. We substantially reduce the PAPR of multi-carrier code division multiple access (MC-CDMA) systems by selecting the combination of spreading codes and scrambling the polarities of the subcarriers. Because an exhaustive search is computationally prohibitive, we use a low-complexity Replacement Search Method (RSM) to reduce the number of search computations, and obtain a good result. Moreover, a better PAPR reduction performance can be achieved by increasing the number of iterations.
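To make the PAPR metric above concrete: PAPR is the peak instantaneous power of the time-domain multi-carrier signal divided by its average power, and flipping subcarrier polarities changes how the subcarriers add up in time. The sketch below computes PAPR via a naive inverse DFT and uses a simple one-pass greedy polarity search; this greedy loop is only a stand-in for the paper's Replacement Search Method, whose details are not given in the abstract.

```python
import cmath

def time_signal(symbols):
    """Naive inverse DFT: frequency-domain subcarrier symbols -> time samples."""
    n = len(symbols)
    return [sum(s * cmath.exp(2j * cmath.pi * k * t / n)
                for k, s in enumerate(symbols)) / n
            for t in range(n)]

def papr(symbols):
    """Peak-to-average power ratio of the time-domain signal."""
    powers = [abs(v) ** 2 for v in time_signal(symbols)]
    return max(powers) / (sum(powers) / len(powers))

def greedy_polarity_search(symbols):
    """Flip one subcarrier's polarity at a time, keeping flips that lower PAPR.
    A simple stand-in for the Replacement Search Method in the abstract."""
    best = list(symbols)
    best_papr = papr(best)
    for k in range(len(best)):
        trial = list(best)
        trial[k] = -trial[k]  # scramble the polarity of subcarrier k
        p = papr(trial)
        if p < best_papr:
            best, best_papr = trial, p
    return best, best_papr

symbols = [1, 1, 1, 1, 1, 1, 1, 1]  # worst case: all subcarriers in phase
scrambled, reduced = greedy_polarity_search(symbols)
```

With all subcarriers in phase, the time signal is an impulse and the PAPR equals the number of subcarriers; even this naive search brings it down considerably.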
26

Prototyp för att öka exponeringen av skönlitteratur på internet / A prototype for increasing the exposure of fiction on the internet

Viderberg, Arvid, Hammersberg, Hampus January 2018 (has links)
On the internet today, the information used to expose books, such as genre, author, places, and summary, is generated manually. The full text of books is not publicly available on the internet due to copyright law, and for this reason it is not possible to generate this type of information automatically. One solution is to construct a prototype that processes the original book and automatically generates information that can be exposed on the internet, without exposing the entire book. In this report, three different algorithms for processing books are compared: stemming, filtering of stop words, and scrambling of sentences within paragraphs. The algorithms are compared with respect to generating relevant information for the services: search engines, automatic metadata, smart ads, and text summarization. Search engines allow a user to search for, e.g., the title or a sentence from the book. Automatic metadata automatically extracts descriptive information from the book. Smart ads use descriptive information to recommend and promote books. Text summarization automatically creates a brief descriptive summary of the book. The information stored from the books should only be information relevant to the services, and it should not have any literary value for a human reader. The results show that the combinations scrambling of sentences → filtering of stop words and filtering of stop words → scrambling of sentences are optimal in terms of searchability. It is also recommended to add stemming as an extra step in the processing of the original book, since it generates more relevant automatic metadata for the book.
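The three algorithms the report compares can be sketched in a few lines each. The pipeline below uses a toy stop-word list and a crude suffix-stripping stemmer (assumptions on my part; the report's actual implementations are not given in the abstract), applied in the recommended order: scramble sentences, filter stop words, then stem.

```python
import random

STOP_WORDS = {"the", "a", "an", "of", "and", "is", "in", "to"}  # toy stop list

def filter_stop_words(sentence):
    """Drop common function words that carry little search value."""
    return [w for w in sentence if w.lower() not in STOP_WORDS]

def crude_stem(word):
    # crude suffix stripping, a stand-in for a real stemmer such as Porter's
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def process_paragraph(sentences, seed=0):
    """Scramble sentence order, drop stop words, then stem each word.
    The scrambling destroys literary value while keeping the vocabulary."""
    rng = random.Random(seed)
    scrambled = list(sentences)
    rng.shuffle(scrambled)
    return [[crude_stem(w) for w in filter_stop_words(s)] for s in scrambled]

para = [["The", "dogs", "ran"], ["A", "cat", "is", "sleeping"]]
out = process_paragraph(para)
```

The output keeps exactly the searchable content words (stemmed), which is why this combination scores well on searchability while no longer reading as prose.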
27

Transceiver a TNC pro datovou komunikaci na UHF s obvodem CC1020 / Transceiver and TNC for Data Communication in UHF Band with CC1020 Chip

Hlavica, Petr January 2008 (has links)
The aim of this Master's thesis, Transceiver and TNC for Data Communication in the UHF Band with the CC1020 Chip, is the design of a unit that provides data transfer over a packet radio network. It is an experiment to determine whether the CC1020 chip can be used for a TNC design. The thesis consists of a study of the AX.25 and KISS protocols, a study of the CC1020's features, the design of a PA and an LNA, and the programming of control software for an Atmel AVR microcontroller.
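Of the two protocols the thesis studies, KISS is the simpler: it frames data (typically AX.25 packets) for the serial link between host and TNC using a frame delimiter FEND (0xC0) and an escape byte FESC (0xDB), with FEND/FESC inside the payload replaced by FESC TFEND / FESC TFESC. The framing constants below are from the KISS specification; the surrounding code is an illustrative sketch, not the thesis's AVR firmware.

```python
FEND, FESC, TFEND, TFESC = 0xC0, 0xDB, 0xDC, 0xDD  # KISS protocol constants

def kiss_encode(payload, command=0x00):
    """Wrap a payload (e.g. an AX.25 frame) in a KISS frame for the serial link.
    command 0x00 means 'data frame, port 0'."""
    out = bytearray([FEND, command])
    for b in payload:
        if b == FEND:
            out += bytes([FESC, TFEND])   # escape the frame delimiter
        elif b == FESC:
            out += bytes([FESC, TFESC])   # escape the escape byte itself
        else:
            out.append(b)
    out.append(FEND)
    return bytes(out)

def kiss_decode(frame):
    """Inverse of kiss_encode for a single well-formed frame."""
    body = frame.strip(bytes([FEND]))[1:]  # drop delimiters and command byte
    out, i = bytearray(), 0
    while i < len(body):
        if body[i] == FESC:
            out.append(FEND if body[i + 1] == TFEND else FESC)
            i += 2
        else:
            out.append(body[i])
            i += 1
    return bytes(out)

packet = bytes([0x01, FEND, 0x02, FESC, 0x03])  # payload containing both specials
assert kiss_decode(kiss_encode(packet)) == packet
```

Because only FEND delimits frames and it never appears escaped inside a frame, the receiver can resynchronize on any FEND byte, which is what makes KISS robust over a noisy serial link.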
28

Novel Transport in Quantum Phases and Entanglement Dynamics Beyond Equilibrium

Szabo, Joseph Charles 06 September 2022 (has links)
No description available.
29

カラの主語性に関する研究 : コーパス検索および文処理実験 / A study on the subjecthood of kara: corpus searches and sentence-processing experiments

TAMAOKA, Katsuo, MU, Xin, 玉岡, 賀津雄, 穆, 欣 05 December 2014 (has links)
No description available.
30

Détection itérative des séquences pseudo-aléatoires / Iterative detection of pseudo-random sequences

Bouvier des Noes, Mathieu 15 October 2015 (has links)
Pseudo-random binary sequences are very common in wireless transmission systems and ciphering mechanisms. More specifically, they are used in direct-sequence spread spectrum transmission systems like UMTS or GPS, or to construct preamble sequences for synchronization and channel estimation purposes, as in LTE. It is always required to synchronize the receiver with the transmitted sequence. The usual way consists in correlating the received signal with a replica of the sequence; if the correlation exceeds a predefined threshold, the synchronization is declared valid.

This thesis addresses a different approach: the binary sequence is detected with a forward error correction decoding algorithm. This makes it possible, for instance, to detect very long sequences (e.g. of period 2^42), for which correlation techniques are too complex to implement, although it requires the receiver to know the sequence's generator polynomial in advance.

We show that decoding a pseudo-random sequence is a problem of the 'detect and decode' kind: the decoder detects the presence of the transmitted sequence and simultaneously estimates its initial state. In conventional detection theory, this corresponds to a GLRT detector that uses a decoder to estimate the unknown parameter, which is the transmitted sequence. For pseudo-random sequences, the decoder implements an iterative message-passing algorithm. It uses a parity check matrix to define the decoding graph on which the algorithm operates. Each parity check equation has a weight t, corresponding to the number of variables in the equation.

Parity check equations are thus an essential component of the decoder, and the decoding procedure is known to be sensitive to their weight t. For m-sequences, the number of parity check equations is already known: it is given by the number of codewords of weight t in the corresponding dual Hamming code. For Gold sequences, the number of parity check equations of weight t = 3 and 4 had already been evaluated by Kasami. In this thesis we provide an analytical expression for the number of parity check equations of weight t = 5 when the degree r of the generator polynomial is odd. Knowing this number is important because there is no parity check equation of weight t < 5 when r is odd. This enumeration is also used to estimate the least degree of parity check equations of a given weight t; the model correctly predicts the average least degree over the set of Gold sequences, although individual sequences vary considerably around this average.

We then address the problem of selecting the parity check equations used by the decoder. We observed that the probability of false alarm is very sensitive to this selection. This is explained by the presence or absence of absorbing sets, which block the convergence of the decoder when it is fed only with noise. These sets are known to be responsible for the error floor of LDPC codes. We give a method to identify these sets from the parity check equations used by the decoder, and we show that 'transverse' cycles destroy these absorbing sets, which can increase the probability of false alarm dramatically. We therefore propose an algorithm for selecting the parity check equations that minimizes the number of cycles of length 6 and 8. Simulations show that the algorithm significantly improves the probability of false alarm and the average acquisition time of a Gold sequence.

Finally, we propose two algorithms for detecting the scrambling codes used in the uplink of UMTS-FDD and CDMA2000 systems. They exploit the properties of the m-sequences from which Gold sequences are built, together with message-passing decoding, and they highlight a new vulnerability of DSSS transmission systems: it is now conceivable to detect such transmissions if the sequence generator is known.
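The parity check equations central to the thesis come directly from the sequence's generator polynomial: an LFSR recurrence such as s[n] = s[n-2] XOR s[n-3] (from x^3 + x + 1) is itself a weight-3 parity equation satisfied by every position of the sequence. The toy sketch below generates an m-sequence of period 7 from that polynomial and verifies the check, on a far smaller scale than the degree-42 codes discussed above.

```python
def lfsr_sequence(taps, state, length):
    """Fibonacci LFSR over GF(2). taps: offsets into the state XORed as feedback.
    With taps (0, 1) and a 3-bit state this realizes s[n] = s[n-3] ^ s[n-2],
    i.e. the primitive polynomial x^3 + x + 1."""
    state = list(state)
    out = []
    for _ in range(length):
        out.append(state[0])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = state[1:] + [fb]  # shift and insert feedback bit
    return out

# m-sequence of period 2**3 - 1 = 7
seq = lfsr_sequence(taps=(0, 1), state=[1, 0, 0], length=14)

# every position satisfies the weight-3 parity check s[n] ^ s[n-2] ^ s[n-3] == 0
checks = [seq[n] ^ seq[n - 2] ^ seq[n - 3] for n in range(3, len(seq))]
```

A detector in the spirit of the thesis evaluates such checks on the received hard decisions: on the true sequence they are all zero (up to channel errors), while on noise roughly half fail, which is what the message-passing decoder exploits to detect and synchronize.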