21

SPECTRAL EFFICIENCY OF 8-ARY PSK MODULATION UTILIZING SQUARE ROOT RAISED COSINE FILTERING

Scheidt, Kelly J. October 2002 (has links)
International Telemetering Conference Proceedings / October 21, 2002 / Town & Country Hotel and Conference Center, San Diego, California / As frequency-allocation restrictions tighten and data rates increase, higher-order modulation techniques are needed to make more efficient use of the available spectrum. When combined with Square Root Raised Cosine (SRRC) filtering, 8-ary Phase Shift Keyed modulation is a spectrally efficient technique that makes better use of today's RF spectrum than standard formats. This paper discusses 8-ary PSK modulation and its spectral efficiency with an SRRC filter, along with comparisons to BPSK, QPSK, and FSK.
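As a rough quantitative companion to the comparison the paper draws, the bandwidth efficiency of M-ary PSK under SRRC shaping can be sketched as log2(M)/(1 + α) bits/s/Hz, where α is the filter roll-off. This is an illustrative calculation, not taken from the paper itself:

```python
from math import log2

def psk_spectral_efficiency(m_ary: int, rolloff: float) -> float:
    """Bits/s/Hz for M-ary PSK shaped by an SRRC filter.

    The occupied bandwidth of an SRRC-shaped symbol stream is
    (1 + rolloff) * symbol_rate, giving log2(M) / (1 + rolloff).
    """
    return log2(m_ary) / (1.0 + rolloff)

# Illustrative roll-off of 0.35; the paper's actual value is not given here.
for name, m in [("BPSK", 2), ("QPSK", 4), ("8-PSK", 8)]:
    print(f"{name}: {psk_spectral_efficiency(m, 0.35):.2f} bits/s/Hz")
```

For any fixed roll-off, 8-PSK carries three times the bits per hertz of BPSK, which is the advantage the paper quantifies.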
22

A reformulation of Coombs' Theory of Unidimensional Unfolding by representing attitudes as intervals

Johnson, Timothy Kevin January 2004 (has links)
An examination of the logical relationships between attitude statements suggests that attitudes can be ordered according to favourability, and can also stand in relationships of implication to one another. The traditional representation of attitudes, as points on a single dimension, is inadequate for representing both these relations, but representing attitudes as intervals on a single dimension can incorporate both favourability and implication. An interval can be parameterised using its two endpoints, or alternatively by its midpoint and latitude. Using this latter representation, the midpoint can be understood as the 'favourability' of the attitude, while the latitude can be understood as its 'generality'. It is argued that the generality of an attitude statement is akin to its latitude of acceptance, since a greater semantic range increases the likelihood of agreement. When Coombs' Theory of Unidimensional Unfolding is reformulated using the interval representation, the key question is how to measure the distance between two intervals on the dimension. There are innumerable ways to answer this question, but the present study restricts attention to eighteen possible 'distance' measures. These measures are based on nine basic distances between intervals on a dimension, as well as two families of models, the Minkowski r-metric and the Generalised Hyperbolic Cosine Model (GHCM). Not all of these measures are distances in the strict sense, as some of them fail to satisfy all the metric axioms. To distinguish between these eighteen 'distance' measures, two empirical tests, the triangle inequality test and the aligned stimuli test, were developed and tested using two sets of attitude statements. The subject matter of the sets of statements differed but the underlying structure was the same.
It is argued that this structure can be known a priori from the logical relationships between the statements' predicates, and empirical tests confirm the underlying structure and the unidimensionality of the statements used in this study. Consequently, predictions of preference could be ascertained from each model and either confirmed or falsified by subjects' judgements. The results indicated that the triangle inequality failed in both stimulus sets. This suggests that the judgement space is not metric, contradicting a common assumption of attitude measurement. This result also falsified eleven of the eighteen 'distance' measures because they predicted the satisfaction of the triangle inequality. The aligned stimuli test used stimuli that were aligned at the endpoint nearest to the ideal interval. The results indicated that subjects preferred the narrower of the two stimuli, contrary to the predictions of six of the measures. Since these six measures all passed the triangle inequality test, only one measure, the GHCM (item), satisfied both tests. However, the GHCM (item) only passes the aligned stimuli test with additional constraints on its operational function. If it incorporates a strictly log-convex function, such as cosh, the GHCM (item) makes predictions that are satisfied in both tests. This is also evidence that the latitude of acceptance is an item rather than a subject or combined parameter.
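The interval representation and the triangle-inequality test lend themselves to a small sketch. The `Attitude` class and the Hausdorff distance below are illustrative assumptions; the Hausdorff metric is one natural interval distance, but not necessarily among the thesis's eighteen measures:

```python
from dataclasses import dataclass

@dataclass
class Attitude:
    midpoint: float   # "favourability" of the statement
    latitude: float   # "generality" (half-width of the interval)

    @property
    def endpoints(self):
        return (self.midpoint - self.latitude, self.midpoint + self.latitude)

def hausdorff_distance(a: Attitude, b: Attitude) -> float:
    """Hausdorff distance between two intervals: the larger of the
    two endpoint separations. This is a true metric, so it satisfies
    the triangle inequality."""
    (a1, a2), (b1, b2) = a.endpoints, b.endpoints
    return max(abs(a1 - b1), abs(a2 - b2))

def satisfies_triangle(d, x, y, z) -> bool:
    """Empirical triangle-inequality check: d(x,z) <= d(x,y) + d(y,z)."""
    return d(x, z) <= d(x, y) + d(y, z) + 1e-12

x, y, z = Attitude(0.0, 1.0), Attitude(2.0, 0.5), Attitude(4.0, 2.0)
print(satisfies_triangle(hausdorff_distance, x, y, z))
```

A 'distance' measure that fails this check for some triple of attitudes, as eleven of the eighteen did empirically, cannot be a metric.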
23

Efficient Memory Arrangement Methods and VLSI Implementations for Discrete Fourier and Cosine Transforms

Hsu, Fang-Chii 24 July 2001 (has links)
The thesis proposes efficient memory arrangement methods for the implementation of the radix-r multi-dimensional Discrete Fourier Transform (DFT) and Discrete Cosine Transform (DCT). By using memory instead of registers to buffer and reorder data, hardware complexity is significantly reduced. We use a recursive architecture that requires only one arithmetic processing element to compute the entire DFT/DCT operation. The algorithm is based on efficient coefficient matrix factorization and data allocation. By exploiting the Kronecker product representation in the fast algorithm, the multi-dimensional DFT/DCT operation is converted into its corresponding 1-D problem and the intermediate data are stored in several memory units. In addition to the smaller area, we also propose a method to reduce the power consumption of the DFT/DCT processors.
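The Kronecker-product (separable) structure that reduces a multi-dimensional DCT to repeated 1-D transforms can be sketched in floating point. This shows only the row-column decomposition; the memory-arrangement scheme that is the thesis's actual contribution is not modelled here:

```python
import math

def dct_1d(x):
    """Direct O(N^2) DCT-II of a 1-D sequence (orthonormal scaling)."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

def dct_2d(block):
    """Row-column decomposition: 1-D DCT over each row, then over each
    column. This is the separable (Kronecker-product) form of the 2-D DCT."""
    rows = [dct_1d(r) for r in block]
    cols = [dct_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

coeffs = dct_2d([[1.0, 2.0], [3.0, 4.0]])
print(coeffs)  # [[5.0, -1.0], [-2.0, ~0.0]]
```

A hardware realisation buffers the intermediate `rows` result in memory and reuses a single 1-D processing element for both passes, which is the register-saving idea the abstract describes.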
24

Digital watermarking for images

Zenkevičiūtė, Irma 01 September 2011 (has links)
The need to protect copyright owners from the illegal use and distribution of intellectual property continues to grow. Digital watermarks are well suited to this task: they not only deter unauthorised use of intellectual property but can also be used to trace an illegitimate user. This thesis discusses discrete cosine transform (DCT) and spread-spectrum watermarking methods, analyses the DCT method in more detail, and tests several simple image attacks, such as cropping, compression and rotation. Cropping part of an image or rotating it can alter the watermark beyond recognition: when the watermark is extracted from the modified image by inverting the embedding method, the pixel coordinates have changed, and after cropping some coordinates are missing entirely. Depending on the rotation angle or the size of the cropped region the watermark is corrupted, whereas changing colour or contrast leaves it intact.
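A toy sketch of additive DCT-domain embedding of the kind discussed here, on a 1-D signal for brevity. The function names and the non-blind extraction (comparing against the original) are illustrative assumptions, not the thesis's actual scheme:

```python
import math

def dct(x):
    """Orthonormal DCT-II."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i in range(n))
            * (math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n))
            for k in range(n)]

def idct(X):
    """Orthonormal DCT-III, the inverse of dct() above."""
    n = len(X)
    return [sum(X[k] * (math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n))
                * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for k in range(n))
            for i in range(n)]

def embed(signal, bits, alpha=2.0, start=1):
    """Additive embedding: shift one low-frequency coefficient per bit."""
    X = dct(signal)
    for j, b in enumerate(bits):
        X[start + j] += alpha * (1 if b else -1)
    return idct(X)

def extract(marked, original, n_bits, start=1):
    """Non-blind extraction: sign of the coefficient difference."""
    Xm, Xo = dct(marked), dct(original)
    return [Xm[start + j] - Xo[start + j] > 0 for j in range(n_bits)]

sig = [10.0, 12.0, 9.0, 11.0, 10.5, 9.5, 11.5, 10.0]
marked = embed(sig, [True, False, True])
print(extract(marked, sig, 3))  # [True, False, True]
```

Cropping or rotating `marked` would shift or discard samples, so the coefficient differences no longer line up, which is exactly the fragility the thesis observes for geometric attacks.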
25

Digital implementation of high speed pulse shaping filters and address based serial peripheral interface design

Rachamadugu, Arun 19 November 2008 (has links)
A method to implement high-speed pulse shaping filters is discussed. The technique uses a unique look-up-table-based architecture implemented in 90nm CMOS using a standard-cell-based ASIC flow, enabling pulse shaping filters for multi-gigabit-per-second data transmission. In this work a raised cosine FIR filter operating at 4 GHz has been designed, and various implementation issues and solutions encountered during the synthesis and layout stages are discussed. In the second portion of this work, the design of a unique address-based serial peripheral interface (SPI) for initializing, calibrating and controlling various blocks in a large system is presented. Some modifications have been made to the standard four-wire SPI protocol to enable high control speeds with fewer top-level pads. The interface is designed to function in duplex mode, supporting both read and write operations.
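The impulse response that such a look-up table would store can be sketched as follows. The parameters `beta`, `sps` and `span` are illustrative, not values from the thesis:

```python
import math

def sinc(x):
    """Normalised sinc: sin(pi*x)/(pi*x)."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def raised_cosine_taps(beta, sps, span):
    """Raised-cosine impulse response sampled at `sps` samples per symbol
    over `span` symbol periods (symbol period normalised to 1).
    `beta` is the roll-off factor, 0 < beta <= 1."""
    half = span * sps // 2
    taps = []
    for n in range(-half, half + 1):
        t = n / sps
        denom = 1.0 - (2.0 * beta * t) ** 2
        if abs(denom) < 1e-9:
            # limiting value at t = +/- 1/(2*beta)
            h = (math.pi / 4.0) * sinc(1.0 / (2.0 * beta))
        else:
            h = sinc(t) * math.cos(math.pi * beta * t) / denom
        taps.append(h)
    return taps

taps = raised_cosine_taps(beta=0.5, sps=4, span=6)
print(len(taps), taps[len(taps) // 2])  # 25 taps, centre tap 1.0
```

In a LUT-based filter these precomputed tap values (quantised to the datapath wordlength) index directly into partial-sum tables, avoiding multipliers at multi-gigabit rates.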
27

Reconfiguration of digital signal processing units based on varying dynamic-range requirements

Χρηστίδης, Γεώργιος 05 January 2011 (has links)
Reducing power consumption is today's most important problem in digital circuits. Several methods have been proposed, among them processors with a dynamically variable wordlength: calculations requiring maximum accuracy use the full processor wordlength, while those where low power is the main target use a smaller one. Such requirements arise frequently in digital signal processing applications, for example in image coding. This thesis therefore studies the inverse discrete cosine transform, the most power-intensive part of image coding, and the relation of its accuracy to the processor wordlength.
The building blocks of the arithmetic unit (adders, subtractors and multipliers supporting two different wordlengths) are then constructed, followed by the remaining units of the processor. Synthesis results show that this processor requires more gates than a fixed-wordlength equivalent, but offers many advantages in static and dynamic power reduction.
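The accuracy-versus-wordlength relation studied here can be sketched by quantising DCT coefficients to a given number of fractional bits before inverse transforming. This is purely illustrative, using a direct O(N²) software transform rather than the thesis's hardware:

```python
import math

def dct(x):
    """Orthonormal DCT-II."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i in range(n))
            * (math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n))
            for k in range(n)]

def idct(X):
    """Orthonormal DCT-III (inverse of dct)."""
    n = len(X)
    return [sum(X[k] * (math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n))
                * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for k in range(n))
            for i in range(n)]

def quantise(values, frac_bits):
    """Round each value to `frac_bits` fractional bits (fixed point)."""
    scale = 1 << frac_bits
    return [round(v * scale) / scale for v in values]

signal = [math.sin(0.3 * i) for i in range(8)]
for bits in (4, 8, 12):
    rec = idct(quantise(dct(signal), bits))
    err = max(abs(a - b) for a, b in zip(signal, rec))
    print(f"{bits} fractional bits -> max reconstruction error {err:.2e}")
```

A dual-wordlength datapath exploits exactly this curve: it switches to the short wordlength whenever the larger reconstruction error is acceptable, saving switching power.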
28

Cosine lobes and visibility for lighting simulation

Perrot, Romuald 07 December 2012 (has links)
Simulating multiple light reflections within an environment requires solving a first-order, infinitely recursive integral equation that has no analytical solution in the general case. Some methods give a theoretically exact solution, but their computation times are far too long to produce several images per second in the near future. Many faster methods exist, but they rely on approximations whose effects are often visible in the resulting images.
Our goal is to propose solutions that reduce these errors through two complementary approaches: (i) a homogeneous representation of the terms of the equation, so that it can be solved with only a few simple operators; (ii) precise handling of visibility information, to reduce the bias of methods based on density estimation. In the longer term, we aim to reduce the cost of the visibility queries required by both contributions, in particular by introducing hierarchical visibility computations so as to amortise their overall cost.
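The first-order recursive integral referred to is the rendering equation, which in its standard hemispherical form reads:

```latex
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o)
  + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,
    L_i(\mathbf{x}, \omega_i)\,(\omega_i \cdot \mathbf{n})\,\mathrm{d}\omega_i
```

The recursion arises because the incident radiance L_i at one surface point is itself the outgoing radiance L_o of another, so the integral expands without bound; the cosine factor (ω_i · n) is the term the "cosine lobes" of the title address.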
29

Digital watermarking of still images

Ahmed, Kamal Ali January 2013 (has links)
This thesis presents novel research on copyright protection of grey-scale and colour digital images. New blind frequency-domain watermarking algorithms using one-dimensional and two-dimensional Walsh coding were developed, with handwritten signatures and mobile phone numbers serving as watermarks. Eight algorithms were developed, all based on the DCT with 1D or 2D Walsh coding and embedding in the low-frequency coefficients of the 8 × 8 DCT blocks. The Walsh-coded watermark is inserted several times via a shuffling process, increasing robustness against cropping attacks. All algorithms are blind, since they do not require the original image; they cause minimal distortion to the host images and the watermark remains invisible. In colour images the watermark is embedded in the green channel of the RGB representation. The effect of the Walsh code length and the embedding strength on robustness and image quality was studied. The algorithms were evaluated on several grey-scale and colour images of size 512 × 512 and other sizes, with fidelity assessed using the peak signal-to-noise ratio (PSNR), the structural similarity index measure (SSIM), normalised correlation (NC) and the StirMark benchmark tools, over a range of scaling factors. Comparisons against embedding without coding show the superiority of the proposed algorithms: 1D and 2D Walsh coding with DCT blocks offers a significant improvement in robustness against JPEG compression and other image-processing operations.
The originality of the schemes enables them to achieve significant robustness compared to conventional non-coded watermarking methods, with an optimal trade-off between the perceptual distortion caused by embedding and robustness against attacks. The new techniques could offer significant advantages to the digital watermarking field and additional benefits to the copyright protection industry.
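Two of the ingredients named above, 1-D Walsh codes and PSNR, can be sketched briefly. The Sylvester-Hadamard recursion below yields Walsh functions in Hadamard (not sequency) order; the thesis's exact construction may differ:

```python
import math

def walsh_codes(order):
    """Generate 2^order Walsh codes of length 2^order via the
    Sylvester-Hadamard recursion H_{2n} = [[H, H], [H, -H]]."""
    h = [[1]]
    for _ in range(order):
        h = ([row + row for row in h]
             + [row + [-v for v in row] for row in h])
    return h

def psnr(original, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB between two pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(original, distorted)) / len(original)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

codes = walsh_codes(3)  # eight length-8 codes
# Walsh/Hadamard rows are mutually orthogonal:
print(sum(a * b for a, b in zip(codes[1], codes[2])))  # 0
print(round(psnr([100, 120, 130], [101, 119, 131]), 1))
```

The orthogonality of the rows is what lets a Walsh-coded watermark be repeated and still be separated at extraction, and PSNR is the standard fidelity figure reported against the host image.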
30

The Lifted Heston Stochastic Volatility Model

Broodryk, Ryan 04 January 2021 (has links)
Can we capture the explosive nature of the volatility skew observed in the market without resorting to non-Markovian models? We show that, in terms of skew, the Heston model cannot match the market at both long and short maturities simultaneously. We introduce Abi Jaber (2019)'s Lifted Heston model and explain how to price options with it using both the cosine method and standard Monte-Carlo techniques. This allows us to back out implied volatilities and compute skew for both models, confirming that the Lifted Heston model nests the standard Heston model. We then produce and analyse the skew for Lifted Heston models with a varying number N of mean-reverting terms, and give an empirical study of the time complexity of increasing N. We observe a weak increase in convergence speed of the cosine method for increased N, and comment on the number of factors to implement for practical use.
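A minimal Euler "full truncation" Monte-Carlo sketch of the standard (one-factor) Heston model is shown below; the Lifted model replaces the single variance factor with N mean-reverting ones. All parameter values are illustrative, not from the thesis:

```python
import math
import random

def heston_call_mc(s0, k, t, r, v0, kappa, theta, xi, rho,
                   n_paths=5000, n_steps=50, seed=1):
    """European call price under Heston dynamics via an Euler
    full-truncation scheme (variance floored at zero in drift
    and diffusion)."""
    random.seed(seed)
    dt = t / n_steps
    payoff_sum = 0.0
    for _ in range(n_paths):
        s, v = s0, v0
        for _ in range(n_steps):
            z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
            dw_s = z1                                       # spot shock
            dw_v = rho * z1 + math.sqrt(1 - rho ** 2) * z2  # correlated vol shock
            vp = max(v, 0.0)  # full truncation
            s *= math.exp((r - 0.5 * vp) * dt + math.sqrt(vp * dt) * dw_s)
            v += kappa * (theta - vp) * dt + xi * math.sqrt(vp * dt) * dw_v
        payoff_sum += max(s - k, 0.0)
    return math.exp(-r * t) * payoff_sum / n_paths

price = heston_call_mc(s0=100, k=100, t=1.0, r=0.02,
                       v0=0.04, kappa=1.5, theta=0.04, xi=0.3, rho=-0.7)
print(round(price, 2))
```

With v0 = theta = 0.04 (20% volatility) the at-the-money price lands near its Black-Scholes counterpart; the cosine method reaches the same prices far faster by integrating against the model's known characteristic function, which is why the thesis benchmarks both.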
