401

Compressed Domain Processing of MPEG Audio

Anantharaman, B 03 1900
MPEG audio compression techniques significantly reduce the storage and transmission requirements for high-quality digital audio. However, compression complicates the processing of audio in many applications. If a compressed audio signal is to be processed, a direct method would be to decode the compressed signal, process the decoded signal and re-encode it. This is computationally expensive due to the complexity of the MPEG filter bank. This thesis deals with processing of MPEG compressed audio. The main contributions of this thesis are (a) extracting wavelet coefficients in the MPEG compressed domain, (b) wavelet-based pitch extraction in the MPEG compressed domain, (c) time-scale modification of MPEG audio, and (d) watermarking of MPEG audio.

The research contributions start with a technique for calculating several levels of wavelet coefficients from the output of the MPEG analysis filter bank. The technique exploits the Toeplitz structure which arises when the MPEG and wavelet filter banks are represented in matrix form. The computational complexity of extracting several levels of wavelet coefficients after decoding the compressed signal and that of extracting them directly from the output of the MPEG analysis filter bank are compared. The proposed technique is found to be computationally efficient for extracting higher levels of wavelet coefficients.

Extracting pitch in the compressed domain becomes essential when large multimedia databases need to be indexed. For example, one may be interested in listening to a particular speaker or to male/female audio segments in a multimedia document. For this application, pitch information is one of the most basic and important features required. Pitch is essentially the time interval between two successive glottal closures. Glottal closures are accompanied by sharp transients in the speech signal, which in turn give rise to local maxima in the wavelet coefficients. Pitch can therefore be calculated by finding the time interval between two successive maxima in the wavelet coefficients. It is shown that the computational complexity for extracting pitch in the compressed domain is less than 7% of that of uncompressed-domain processing. An algorithm for extracting pitch in the compressed domain is proposed, and its results for synthetic signals and for words uttered by male and female speakers are reported.

In a number of important applications, one needs to modify an audio signal to render it more useful than the original. Typical applications include changing the time evolution of an audio signal (increasing or decreasing the rate of articulation of a speaker), or adapting a given audio sequence to a given video sequence. In this thesis, time-scale modifications are obtained in the subband domain such that when the modified subband signals are given to the MPEG synthesis filter bank, the desired time-scale modification of the decoded signal is achieved. This is done by making use of sinusoidal modeling [1]. Here, each subband signal is modeled in terms of parameters such as amplitude, phase and frequency, and is subsequently synthesised from these parameters with Ls = k La, where Ls is the length of the synthesis window, k is the time-scale factor and La is the length of the analysis window. As the PCM version of the time-scaled signal is not available, psychoacoustic-model-based bit allocation cannot be used; hence a new bit allocation is performed using a subband coding algorithm.
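As a rough illustration of the sinusoidal-model time-scale modification described above, the sketch below stretches a signal by a factor k using synthesis frames of length Ls = k·La. It is a deliberate simplification, assuming a full-band PCM input and keeping only the dominant sinusoid per analysis frame, whereas the thesis models each MPEG subband signal with its full set of amplitude, phase and frequency parameters.

```python
# Hedged sketch of sinusoidal-model time-scale modification with
# Ls = k * La: each analysis frame is reduced to the amplitude and
# frequency of its dominant sinusoid, and a synthesis frame k times
# longer is regenerated from those parameters with phase continuity.
import numpy as np

def tsm_sinusoidal(x, fs, k=1.5, la=512):
    ls = int(round(k * la))                        # synthesis length Ls = k * La
    out, phase = [], 0.0
    window = np.hanning(la)
    for start in range(0, len(x) - la + 1, la):
        frame = x[start:start + la] * window
        spec = np.fft.rfft(frame)
        peak = int(np.argmax(np.abs(spec)))
        amp = 2.0 * np.abs(spec[peak]) / np.sum(window)   # rough amplitude estimate
        freq = peak * fs / la                              # dominant frequency (Hz)
        n = np.arange(ls)
        out.append(amp * np.cos(2 * np.pi * freq * n / fs + phase))
        phase += 2 * np.pi * freq * ls / fs                # keep phase continuous
    return np.concatenate(out)

if __name__ == "__main__":
    fs = 16000
    t = np.arange(0, 1.0, 1 / fs)
    x = 0.8 * np.sin(2 * np.pi * 440 * t)          # steady test tone
    y = tsm_sinusoidal(x, fs, k=1.5)
    print(len(x) / fs, "s in ->", len(y) / fs, "s out")    # roughly 1.5x longer
```

For a steady tone the output is simply a k-times-longer tone at the same pitch, which captures the essence of time-scale modification without pitch change.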
This method has been satisfactorily tested for time-scale expansion and compression of speech and music signals.

The recent growth of multimedia systems has increased the need for protecting digital media, and digital watermarking has been proposed as a method for protecting digital documents. The watermark needs to be added to the signal in such a way that it does not cause audible distortions. However, the idea behind lossy MPEG encoders is to remove, or render insignificant, those portions of the signal which do not affect human hearing. This makes the watermark insignificant, and hence proving ownership of the signal becomes difficult once an audio signal is compressed. The existing compressed-domain methods merely change the bits or the scale factors according to a key. Though simple, these methods are not robust to attacks, and they require the original signal to be available in the verification process. In this thesis we propose a watermarking method based on the spread-spectrum technique which does not require the original signal during verification and is shown to be more robust than the existing methods. In our method the watermark is spread across many subband samples. Two factors need to be considered: (a) the watermark should be embedded only in those subbands where the added noise remains inaudible, and (b) the watermark should be added to subbands with sufficient bit allocation, so that it does not become insignificant due to lack of bits. Embedding the watermark in the lower subbands would cause distortion, while embedding it in the higher subbands would prove futile as the bit allocation there is practically zero. Considering all these factors, one can introduce noise to samples across many frames corresponding to subbands 4 to 8. In the verification process, it is sufficient to have the key/code and the possibly attacked signal. This method has been satisfactorily tested for robustness to scale-factor and LSB changes and to MPEG decoding and re-encoding.
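A minimal sketch of the spread-spectrum idea follows: a key seeds a pseudo-noise sequence that is added at low amplitude to the samples of subbands 4 to 8 across many frames, and verification correlates the possibly attacked subband samples against the same key-generated sequence, so the original signal is not needed. The embedding strength, detection threshold and stand-in subband data are illustrative assumptions, not the parameters used in the thesis.

```python
# Hedged sketch of spread-spectrum watermarking of MPEG subband samples.
# The watermark is spread over subbands 4..8 of many frames; detection
# only needs the key and the possibly attacked signal (no original).
import numpy as np

SUBBANDS = slice(4, 9)     # subbands 4..8 (as suggested by the abstract)
ALPHA = 0.05               # embedding strength (illustrative)

def pn_sequence(key, shape):
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=shape)

def embed(subband_frames, key):
    """subband_frames: array [n_frames, 32] of MPEG subband samples."""
    wm = np.zeros_like(subband_frames)
    wm[:, SUBBANDS] = pn_sequence(key, subband_frames[:, SUBBANDS].shape)
    return subband_frames + ALPHA * wm

def verify(subband_frames, key, threshold=3.0):
    """Correlate the received subbands with the key's PN sequence."""
    region = subband_frames[:, SUBBANDS]
    pn = pn_sequence(key, region.shape)
    # Normalized statistic: roughly N(0,1) without the watermark,
    # roughly ALPHA*sqrt(N)/std(region) with it.
    z = np.sum(region * pn) / (np.std(region) * np.sqrt(region.size))
    return z, z > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.normal(size=(2000, 32))          # stand-in subband data
    marked = embed(frames, key=1234)
    print("right key:", verify(marked, key=1234))
    print("wrong key:", verify(marked, key=9999))
```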
402

Perceptual Criterion Based Rate Control And Fast Mode Search For Spatial Intra Prediction In Video Coding

Nagori, Soyeb 05 1900
This thesis dwells on two important problems in the field of video coding, namely rate control and spatial domain intra prediction. While the former is applicable to most video compression standards, the latter applies to recent advanced video compression standards such as H.264, VC-1 and AVS. Rate control regulates the instantaneous video bit-rate to maximize a picture quality metric while satisfying channel rate and buffer size constraints, and it has an important bearing on the picture quality of encoded video. Typically, a quality metric such as Peak Signal-to-Noise Ratio (PSNR) or Weighted Signal-to-Noise Ratio (WSNR) is chosen out of convenience; however, neither metric is a true measure of perceived video quality. A few researchers have attempted to derive rate control algorithms combining standard PSNR with ad-hoc perceptual metrics of video quality. The concept of using a perceptual criterion for video coding was introduced in [7] within the context of perceptual adaptive quantization. In that work, quantization noise levels were adjusted such that more noise was allowed where it was less visible (busy and textured areas) while sensitive areas (typically flat and low-detail regions) were finely quantized. Macro-blocks (MBs) were classified into low-detail, texture and edge areas by a classifier that studied the variance of sub-blocks within a macro-block, and the rate models were trained on sets of pre-classified video. One drawback of the above scheme, as with standard PSNR, is that neither accounts for the perceptual effect of motion. The work in [8] achieved this by assigning higher weights to the regions of the image experiencing the highest motion; the center of the image and objects in the foreground are also perceived as more important than the sides. However, attempts to use perceptual metrics for video quality have been limited by the accuracy of the chosen metrics. In recent years, new and improved metrics of subjective quality have been invented and their statistical accuracy has been studied in a formal manner. Particularly interesting is the work undertaken by the ITU and the Video Quality Experts Group (VQEG). VQEG conducted two phases of testing: in the first phase, several algorithms were tested, but none was found to be more accurate than a PSNR-based metric; in the second phase, a few years later, new algorithms were evaluated and four of them achieved results good enough to warrant standardization as part of ITU-T Recommendation J.144. These experiments are referred to as the FR-TV (Full Reference Television) phase-II evaluations. ITU-T J.144 does not explicitly identify a single algorithm but provides guidelines on the selection of appropriate techniques to objectively measure subjective video quality, describing four reference algorithms as well as PSNR. Amongst the four, the NTIA General Video Quality Model (VQM) [11] is the best performing and has been adopted by the American National Standards Institute (ANSI) as North American standard T1.801.03. NTIA's approach has been to focus on defining parameters that model how humans perceive video quality. These parameters have been combined using linear models to produce estimates of video quality that closely approximate subjective test results.
The NTIA General Video Quality Model (VQM) has been proven to have a strong correlation with subjective quality. In the first part of the thesis, we apply metrics motivated by the NTIA VQM model within a rate control algorithm to maximize perceptual video quality. We derive perceptual weights from key NTIA parameters to influence the QP value that determines the degree of quantization. Our experiments demonstrate that a perceptual-quality-motivated TMN-8 rate control in an H.263 encoder results in perceivable quality improvements over a baseline TMN-8 rate control algorithm that uses a PSNR metric. Our experimental results on a set of 11 sequences show an average bit-rate reduction of 6% using the proposed algorithm at the same perceptual quality as standard TMN-8.

The second part of our thesis work deals with spatial domain intra prediction as used in advanced video coding standards such as H.264. The H.264 Advanced Video Coding standard [36] has been shown to achieve video quality similar to older standards such as MPEG-2 and H.263 at nearly half the bit-rate. Generally, this compression improvement is attributed to several new tools introduced in H.264, including spatial intra prediction, adaptive block size for motion compensation, an in-loop de-blocking filter, context adaptive binary arithmetic coding (CABAC), and multiple reference frames. While the new tools allow better coding efficiency, they also introduce additional computational complexity at both the encoder and decoder ends. We are especially concerned here with the impact of intra prediction on the computational complexity of the encoder. H.264 reference implementations such as JM [29] search through all allowed intra-prediction “modes” in order to find the optimal mode. While this approach yields the optimal prediction mode, it comes at an extremely heavy computational cost. Hence there is a lot of interest in well-motivated algorithms that reduce the computational complexity of the search for the best prediction mode while retaining the quality advantages of full-search Intra4x4. We propose a novel algorithm to reduce the complexity of full search by exploiting our knowledge of the source statistics. Specifically, we analyze the transform-domain energy distribution of the original 4x4 block in different directions and use the results of this analysis to eliminate unlikely modes and reduce the search space for the optimal Intra mode. Experimental results show that the proposed algorithm achieves quality metrics (PSNR) similar to full search at nearly a third of the complexity.

This thesis has four chapters and is organized as follows. In the first chapter we introduce the basics of video encoding, present existing work in the area of perceptual rate control, briefly introduce the TMN-8 rate control algorithm, and finally introduce spatial domain intra prediction. In the second chapter we explain the challenges in combining NTIA perceptual parameters with the TMN-8 rate control algorithm, examine the perceptual features used by NTIA from a video compression perspective, and explain how the perceptual metrics capture typical compression artifacts. We then present a two-pass perceptual rate control (PRC-II) algorithm and list experimental results on a set of video sequences showing an average 6% bit-rate reduction using PRC-II rate control over standard TMN-8 rate control. Chapter 3 contains part II of our thesis work, on spatial domain intra prediction.
We start by reviewing existing work in intra prediction and then present the details of our proposed intra prediction algorithm along with experimental results. We conclude the thesis in Chapter 4 and discuss directions for future work on both of our proposed algorithms.
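As a rough illustration of the mode-pruning idea described above (not the thesis's actual algorithm), the sketch below examines the 2-D transform-domain energy of the original 4x4 block: energy concentrated in the first transform row indicates a block that is smooth down each column, favoring vertical-style prediction, while energy concentrated in the first transform column favors horizontal-style prediction. The energy-ratio threshold and the reduced candidate sets are illustrative assumptions.

```python
# Hedged sketch: shortlist H.264 Intra4x4 prediction modes from the
# transform-domain energy distribution of the original 4x4 block,
# instead of rate-distortion-testing all nine modes.
import numpy as np
from scipy.fft import dctn

# Standard H.264 Intra4x4 mode numbers: 0=Vertical, 1=Horizontal, 2=DC.
VERTICAL, HORIZONTAL, DC = 0, 1, 2

def candidate_modes(block, ratio=2.0):
    """Return a reduced candidate-mode list for one 4x4 block (heuristic)."""
    c = dctn(block.astype(float), type=2, norm="ortho")
    e_top = np.sum(c[0, 1:] ** 2)    # horizontal-frequency energy only
    e_left = np.sum(c[1:, 0] ** 2)   # vertical-frequency energy only
    if e_top + e_left < 1e-9:
        return [DC]                  # essentially flat block
    if e_top > ratio * e_left:
        return [VERTICAL, DC]        # smooth down each column
    if e_left > ratio * e_top:
        return [HORIZONTAL, DC]      # smooth along each row
    return list(range(9))            # ambiguous: keep the full search

if __name__ == "__main__":
    vertical_stripes = np.tile(np.array([10, 50, 90, 130]), (4, 1))  # columns constant
    horizontal_stripes = vertical_stripes.T                          # rows constant
    print(candidate_modes(vertical_stripes))     # expect [0, 2]
    print(candidate_modes(horizontal_stripes))   # expect [1, 2]
```

An encoder would then run its rate-distortion decision only over the returned shortlist instead of all nine Intra4x4 modes.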
403

Multiuser Transmission in Code Division Multiple Access Mobile Communications Systems

Irmer, Ralf 28 June 2005
Code Division Multiple Access (CDMA) is the technology used in all third-generation cellular communications networks, and it is a promising candidate for the definition of fourth-generation standards. The wireless mobile channel is usually frequency-selective, causing interference among the users in one CDMA cell. Multiuser Transmission (MUT) algorithms for the downlink can increase the number of supportable users per cell, or decrease the transmit power necessary to guarantee a certain quality of service. Transmitter-based algorithms exploiting channel knowledge at the transmitter are also motivated by information-theoretic results such as the writing-on-dirty-paper theorem. The signal-to-noise ratio (SNR) is a reasonable performance criterion for noise-dominated scenarios. Using linear filters in the transmitter and the receiver, the SNR can be maximized with the proposed Eigenprecoder. Using multiple transmit and receive antennas, the performance can be significantly improved, and the Generalized Selection Combining (GSC) MIMO Eigenprecoder concept enables reduced-complexity transceivers. Methods eliminating the interference completely or minimizing the mean squared error exist for both the transmitter and the receiver. The maximum-likelihood sequence detector in the receiver minimizes the bit error rate (BER), but it has no direct transmitter counterpart. The proposed Minimum Bit Error Rate Multiuser Transmission (TxMinBer) minimizes the BER at the detectors by transmit signal processing. This nonlinear approach uses knowledge of the transmit data symbols and the wireless channel to calculate a transmit signal that minimizes the BER under a transmit power constraint, using nonlinear optimization methods such as sequential quadratic programming (SQP). The performance of linear and nonlinear MUT algorithms with linear receivers is compared using the TD-SCDMA standard as an example. The interference problem can be solved with all MUT algorithms, but the TxMinBer approach requires the least transmit power to support a given number of users. The high computational complexity of MUT algorithms is also an important issue for their practical real-time application. The exploitation of structural properties of the system matrix reduces the complexity of the linear MUT methods significantly, and several efficient methods to invert the system matrix are shown and compared. Proposals to reduce the complexity of the Minimum Bit Error Rate Multiuser Transmission method are made, including a method that avoids the power constraint by phase-only optimization. The complexity of the nonlinear methods is still some orders of magnitude higher than that of the linear MUT algorithms, but further research on this topic and the increasing processing power of integrated circuits will eventually allow their better performance to be exploited.
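A toy version of the TxMinBer idea is sketched below under strong simplifying assumptions: a small random real-valued system matrix stands in for the combined spreading and channel filtering, all users employ BPSK, and SciPy's SLSQP solver plays the role of the SQP method mentioned above. It only illustrates how the sum of per-user error probabilities can be minimized under a total transmit power constraint; the matrix sizes, noise level and starting point are invented for the example.

```python
# Hedged sketch of minimum-BER multiuser transmission (TxMinBer):
# choose the transmit vector x so that the per-user decision variables
# (T x)_k carry the correct sign with maximal margin, by minimizing the
# sum of Q-function error probabilities under a power constraint.
import numpy as np
from scipy.optimize import minimize
from scipy.special import erfc

rng = np.random.default_rng(1)

K, N = 4, 8                     # users, chips per symbol (toy sizes)
T = rng.normal(size=(K, N))     # effective system matrix (spreading * channel)
d = rng.choice([-1.0, 1.0], K)  # BPSK data symbols of the K users
sigma = 0.3                     # receiver noise standard deviation
P_max = float(N)                # total transmit power budget

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

def sum_ber(x):
    # Error probability of user k is Q(d_k * (T x)_k / sigma).
    margins = d * (T @ x) / sigma
    return np.sum(Q(margins))

cons = ({"type": "ineq", "fun": lambda x: P_max - np.dot(x, x)},)
x0 = T.T @ d                                 # matched-filter style start
x0 *= np.sqrt(P_max) / np.linalg.norm(x0)    # scale onto the power budget

res = minimize(sum_ber, x0, method="SLSQP", constraints=cons)
print("initial sum-BER  :", sum_ber(x0))
print("optimized sum-BER:", sum_ber(res.x))
print("power used       :", np.dot(res.x, res.x))
```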
404

New signal processing approaches to peak-to-average power ratio reduction in multicarrier systems

Bae, Ki-taek 06 December 2010
Multi-carrier systems based on orthogonal frequency division multiplexing (OFDM) are efficient technologies for the implementation of broadband wireless communication systems. OFDM is widely used and has been adopted for current mobile broadband wireless communication systems such as IEEE 802.11a/g wireless LANs, WiMAX, 3GPP LTE, and DVB-T/H digital video broadcasting systems. Despite their many advantages, however, OFDM-based systems suffer from potentially high peak-to-average power ratio (PAR). Since communication systems typically include nonlinear devices such as RF power amplifiers (PAs) and digital-to-analog converters (DACs), high PAR results in increased symbol error rates and spectral radiation. To mitigate these nonlinear effects and to avoid nonlinear saturation of the PA, the operating point of a signal with high peak power must be backed off into the linear region of the PA. This so-called output backoff (OBO) results in reduced power conversion efficiency, which limits the battery life for mobile applications, reduces the coverage range, and increases both the cost of the PA and the power consumption in the cellular base station. With the increasing demand for high energy efficiency, low power consumption, and greenhouse gas emission reduction, PAR reduction is a key technique in the design of practical OFDM systems. Motivated by the PAR reduction problem associated with multi-carrier systems such as OFDM, this research explores the state of the art of PAR reduction techniques and develops new signal processing techniques that can achieve a minimum PAR for given system parameters and that are compatible with the appropriate standards. The following are the three principal contributions of this dissertation research. First, we present and derive semi-analytical results for the output of asymptotic iterative clipping and filtering. This work provides expressions and analytical techniques for estimating the attenuation factor, error vector magnitude, and bit-error rate (BER), using a noise enhancement factor obtained by simulation. With these semi-analytical results, we obtain a relationship between the BER and the target clipping level for asymptotic iterative clipping and filtering. These results serve as a performance benchmark for designing PAR reduction techniques using iterative clipping and filtering in OFDM systems. Second, we analyze the impact of the selected mapping (SLM) technique on the BER performance of OFDM systems in an additive white Gaussian noise channel in the presence of nonlinearity. We first derive a closed-form expression for the envelope power distribution in an OFDM system with SLM. Then, using this derived envelope power distribution, we investigate the BER performance and the total degradation (TD) of OFDM systems with SLM in the presence of nonlinearity. As a result, we obtain the TD-minimizing peak backoff (PBO) and clipping ratio as functions of the number of candidate signals in SLM. Third, we propose an adaptive clipping control algorithm and a pilot-aided algorithm to address a fundamental issue associated with two low-complexity PAR reduction techniques, namely tone reservation (TR) and active constellation extension (ACE). Specifically, we discovered that the existing low-complexity algorithms have a low-clipping-ratio problem, in that they cannot achieve the minimum PAR when the target clipping level is set below the initially unknown optimum value.
Using our proposed algorithms, we overcome this problem and demonstrate that additional PAR reduction is obtained for any low value of the initial target clipping ratio.
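A minimal sketch of the iterative clipping-and-filtering loop mentioned above follows: the oversampled time-domain OFDM symbol is clipped to a target level, the out-of-band spectral regrowth is removed in the frequency domain (which partially restores the peaks), and the step is repeated. The subcarrier count, oversampling factor, clipping ratio and subcarrier mapping are illustrative assumptions, not the dissertation's settings.

```python
# Hedged sketch: iterative clipping and filtering (ICF) for PAR
# reduction of one OFDM symbol. Out-of-band bins are zeroed after
# each clip, so the clipping noise stays (mostly) in band.
import numpy as np

def par_db(x):
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

def icf(freq_symbols, n_fft, oversample=4, clip_ratio_db=4.0, iters=8):
    """freq_symbols: length n_fft vector of data symbols (in-band bins)."""
    n = n_fft * oversample
    in_band = np.zeros(n, dtype=complex)
    in_band[:n_fft // 2] = freq_symbols[:n_fft // 2]    # simple zero-padded
    in_band[-n_fft // 2:] = freq_symbols[n_fft // 2:]   # subcarrier mapping
    x = np.fft.ifft(in_band) * np.sqrt(n)
    for _ in range(iters):
        rms = np.sqrt(np.mean(np.abs(x) ** 2))
        a_max = rms * 10 ** (clip_ratio_db / 20)
        over = np.abs(x) > a_max
        x[over] = a_max * np.exp(1j * np.angle(x[over]))   # clip envelope, keep phase
        X = np.fft.fft(x) / np.sqrt(n)
        mask = np.zeros(n, dtype=bool)
        mask[:n_fft // 2] = True
        mask[-n_fft // 2:] = True
        X[~mask] = 0.0                                     # remove out-of-band regrowth
        x = np.fft.ifft(X) * np.sqrt(n)
    return x

if __name__ == "__main__":
    n_fft = 256
    qpsk = (np.random.default_rng(0).choice([1, -1], (n_fft, 2)) @ [1, 1j]) / np.sqrt(2)
    x0 = icf(qpsk, n_fft, iters=0)
    x8 = icf(qpsk, n_fft, iters=8)
    print(f"PAR before: {par_db(x0):.2f} dB, after ICF: {par_db(x8):.2f} dB")
```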
405

Effect of classical noise on open systems of coupled qubits

Τζέμος, Αθανάσιος 27 May 2014
In this Ph.D. thesis we study interacting open quantum two-state systems and the behavior of their quantum features in the presence of classical noise. More specifically, we study the disentanglement time of two qubits of the Heisenberg XY ferromagnet as a function of the strength of classical Gaussian white noise, with all other parameters of the system fixed. Our main result is that all of the interesting noise-induced effects, i.e. stochastic resonance, stochastic anti-resonance and the noise-shield effect, depend directly on the initial preparation of the two-qubit system, whether it is isolated or a subsystem of a larger structure (a qubit chain). We also observe that the environmental temperature can serve as a control factor for the above effects. We provide strong evidence for the need to map the density matrix of an open quantum system according to the noise effects it can exhibit.
406

Indoor radio propagation modeling for system performance prediction

Luo, Meiling 17 July 2013
This thesis aims at proposing possible enhancements to the Multi-Resolution Frequency-Domain ParFlow (MR-FDPF) model. As a deterministic radio propagation model, the MR-FDPF model offers a high level of accuracy, but it also suffers from common limitations of deterministic models. For instance, realistic radio channels are not deterministic but random processes, due to, e.g., moving people or moving objects, and thus cannot be completely described by a purely deterministic model. In this thesis, a semi-deterministic model is proposed based on the deterministic MR-FDPF model, introducing a stochastic part to take into account the randomness of realistic radio channels. The deterministic part of the semi-deterministic model is the mean path loss, and the stochastic part comes from the shadow fading and the small-scale fading. Besides, many radio propagation simulators provide only mean power predictions; however, mean power alone is not enough to fully describe the behavior of radio channels. It has been shown that fading also has an important impact on radio system performance. Thus, a good radio propagation simulator should also provide fading information, so that an accurate Bit Error Rate (BER) prediction can be achieved. In this thesis, the fading information is extracted based on the MR-FDPF model and a realistic BER is then predicted. Finally, the realistic BER prediction allows the implementation of adaptive modulation schemes. This has been done in the thesis for three systems: Single-Input Single-Output (SISO) systems, Maximum Ratio Combining (MRC) diversity systems and wideband Orthogonal Frequency-Division Multiplexing (OFDM) systems.
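A rough sketch of how such a semi-deterministic prediction could feed adaptive modulation is given below, with the caveat that a simple log-distance formula stands in for the MR-FDPF mean path loss, and that the shadowing, Rayleigh-fading and BER approximations are generic textbook assumptions rather than the statistics extracted in the thesis.

```python
# Hedged sketch: semi-deterministic link evaluation and adaptive
# modulation selection. A log-distance mean path loss stands in for an
# MR-FDPF prediction; shadowing and Rayleigh fading add the stochastic part.
import numpy as np

rng = np.random.default_rng(0)

def mean_path_loss_db(d_m, pl0=40.0, n_exp=3.0):
    # Log-distance model as a stand-in for the deterministic prediction.
    return pl0 + 10 * n_exp * np.log10(d_m)

def snr_samples_db(tx_dbm, d_m, noise_dbm=-95.0, shadow_sigma_db=4.0, n=10000):
    pl = mean_path_loss_db(d_m) + rng.normal(0, shadow_sigma_db, n)   # shadow fading
    fading_db = 10 * np.log10(rng.exponential(1.0, n))                # Rayleigh power
    return tx_dbm - pl + fading_db - noise_dbm

def avg_ber(snr_db, bits_per_symbol):
    # Crude M-QAM BER approximation in AWGN, averaged over fading samples.
    snr = 10 ** (snr_db / 10)
    m = 2 ** bits_per_symbol
    ber = 0.2 * np.exp(-1.5 * snr / (m - 1))
    return np.mean(np.minimum(ber, 0.5))

def pick_modulation(tx_dbm, d_m, target_ber=1e-3):
    choice = None
    for bits in (2, 4, 6):                  # QPSK, 16-QAM, 64-QAM
        if avg_ber(snr_samples_db(tx_dbm, d_m), bits) <= target_ber:
            choice = bits
    return choice                           # None -> link too poor

if __name__ == "__main__":
    for d in (10, 30, 100):
        print(d, "m ->", pick_modulation(tx_dbm=20.0, d_m=d), "bits/symbol")
```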
407

Two-player interaction in quantum computing: cryptographic primitives & query complexity

Magnin, Loick 05 December 2011
This dissertation studies two different aspects of two-player interaction in the model of quantum communication and quantum computation. First, we study two cryptographic primitives that are used as basic building blocks for sophisticated cryptographic protocols between two players, e.g. identification protocols. The first primitive is "quantum bit commitment". This primitive cannot be realized in an unconditionally secure way; however, security can be obtained by restraining the power of the two players. We study this primitive when the two players can only create quantum Gaussian states and perform Gaussian operations. These operations are a subset of what is allowed by quantum physics and play a central role in quantum optics, hence this is an accurate model of communication through optical fibers. We show that, unfortunately, this restriction does not allow secure bit commitment. The proof of this result is based on the notion of "intrinsic purification" that we introduce to circumvent the use of Uhlmann's theorem when the quantum states are Gaussian. We then examine a weaker primitive, "quantum weak coin flipping", in the standard model of quantum computation. Mochon has shown that there exists such a protocol with arbitrarily small bias. We give a clear and meaningful interpretation of his proof, which allows us to present a drastically shorter and simplified proof.

The second part of the dissertation deals with different methods of proving lower bounds on quantum query complexity. This is a very important model in quantum complexity in which numerous results have been proved. In this model, an algorithm has restricted access to the input: it can only query individual bits. We consider a generalization of the standard model in which an algorithm does not compute a classical function but generates a quantum state. This generalization allows us to compare the strength of the different methods used to prove lower bounds in this model. We first prove that the "multiplicative adversary method" is stronger than the "additive adversary method". We then show a reduction from the "polynomial method" to the multiplicative adversary method, and hence prove that the multiplicative adversary method is the strongest one. Adversary methods are usually difficult to use since they involve computing norms of matrices of very large size. We show how studying the symmetries of a problem can largely simplify these computations. Last, using these principles we prove the tight lower bound for the INDEX-ERASURE problem. This is a quantum state generation problem that has links with the famous GRAPH-ISOMORPHISM problem.
408

Light sources for quantum information

Messin, Gaëtan 10 July 2008
Quantum information, with its protocols for cryptography, teleportation and computation, has found in quantum light sources a set of tools with very high potential. Triggered single-photon sources are obviously among them, as are twin-photon sources, on which the emission of heralded photons and the production of polarization-entangled photon pairs rely. Quantum light sources keep finding new applications, for example the conditional entanglement of single-photon emitters through joint measurement of the photons they emit, the enhancement of entanglement of EPR beams, or the storage of single photons in atomic vapors.

All of my research activities are part of this movement. My work has largely concerned single-photon sources and photon-pair sources, as well as their applications to quantum cryptography, quantum teleportation and quantum computation. It now opens onto what comes next: continuous variables, quantum memories and teleportation of non-classical states are probably the next steps.
409

Performance evaluation and enhancement for AF two-way relaying in the presence of channel estimation error

Wang, Chenyuan 30 April 2012
Cooperative relaying is a promising diversity-achieving technique to provide reliable transmission, high throughput and extensive coverage for wireless networks in a variety of applications. Two-way relaying is a spectrally efficient protocol, providing one solution to overcome the half-duplex loss of one-way relay channels. Moreover, incorporating multiple-input multiple-output (MIMO) technology can further improve the spectral efficiency and diversity gain. A lot of related work has been performed on the two-way relay network (TWRN), but most of it assumes perfect channel state information (CSI). In a realistic scenario, however, the channel is estimated and estimation error exists. In this thesis, we therefore explicitly take the CSI error into account and investigate its impact on the performance of amplify-and-forward (AF) TWRNs in which either multiple distributed single-antenna relays or a single multiple-antenna relay station is exploited. For the distributed relay network, we consider imperfect self-interference cancellation at both sources, which exchange information with the help of multiple relays, and maximal ratio combining (MRC) is then applied to improve the decision statistics under imperfect signal detection. The system performance degradation in terms of outage probability and average bit-error rate (BER) is analyzed, along with its asymptotic trend. To further improve the spectral efficiency while maintaining the spatial diversity, we utilize maximum-minimum (Max-Min) relay selection (RS) and examine the impact of imperfect CSI on this single-RS scheme. To mitigate the negative effect of imperfect CSI, we resort to adaptive power allocation (PA), minimizing either the outage probability or the average BER, which can be cast as a Geometric Programming (GP) problem. Numerical results verify the correctness of our analysis and show that the adaptive PA scheme outperforms the equal PA scheme under the aggregated effect of imperfect CSI. When employing a single MIMO relay, the problem of robust MIMO relay design is dealt with by considering the fact that only imperfect CSI is available. We design the MIMO relay based upon the CSI estimates, where the estimation errors are included to attain a robust design under the worst-case philosophy. The optimization problem corresponding to the robust MIMO relay design is shown to be nonconvex, which motivates the pursuit of semidefinite relaxation (SDR) coupled with the randomization technique to obtain computationally efficient, high-quality approximate solutions. Numerical simulations compare the proposed MIMO relay with an existing nonrobust method and thereby validate its robustness against channel uncertainty.
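The effect of imperfect self-interference cancellation can be illustrated with a small Monte Carlo sketch, given below under strong simplifying assumptions: a single single-antenna AF relay rather than the multi-relay or MIMO configurations studied in the thesis, BPSK signalling, reciprocal Rayleigh channels, and terminals that know the relay gain. The residual term left after cancelling with the estimated (rather than true) channel produces the BER floor that motivates the power-allocation and robust-design work described above.

```python
# Hedged Monte Carlo sketch: BER at one terminal of a single-relay AF
# two-way exchange when self-interference is cancelled with imperfect
# channel estimates. Powers, error variances, and the assumption that
# the terminal knows the relay gain are illustrative.
import numpy as np

rng = np.random.default_rng(2)

def ber_two_way_af(snr_db, est_err_var, n=200_000, p_a=1.0, p_b=1.0, p_r=1.0):
    sigma2 = 10 ** (-snr_db / 10)
    # Reciprocal Rayleigh channels source<->relay and their noisy estimates.
    h_a = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
    h_b = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
    e = np.sqrt(est_err_var / 2)
    h_a_hat = h_a + e * (rng.normal(size=n) + 1j * rng.normal(size=n))
    h_b_hat = h_b + e * (rng.normal(size=n) + 1j * rng.normal(size=n))

    s_a = rng.choice([-1.0, 1.0], n)          # BPSK symbols of terminal A
    s_b = rng.choice([-1.0, 1.0], n)          # BPSK symbols of terminal B
    noise = lambda: np.sqrt(sigma2 / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n))

    # Multiple-access phase: both terminals transmit to the relay.
    y_r = np.sqrt(p_a) * h_a * s_a + np.sqrt(p_b) * h_b * s_b + noise()
    g = np.sqrt(p_r / (p_a * np.abs(h_a) ** 2 + p_b * np.abs(h_b) ** 2 + sigma2))
    # Broadcast phase: terminal A receives the amplified sum.
    y_a = h_a * g * y_r + noise()
    # Self-interference cancellation with the *estimated* channel.
    y_a -= g * np.sqrt(p_a) * h_a_hat ** 2 * s_a
    # Coherent detection of B's symbol.
    s_b_hat = np.sign(np.real(y_a * np.conj(g * np.sqrt(p_b) * h_a_hat * h_b_hat)))
    return np.mean(s_b_hat != s_b)

if __name__ == "__main__":
    for err in (0.0, 0.01, 0.05):
        print(f"estimation error var {err}: BER = {ber_two_way_af(20.0, err):.4f}")
```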
410

Solids of Revolution – from the Integration of a given Function to the Modelling of a Problem with the help of CAS and GeoGebra

Wurnig, Otto 22 May 2012
After the students in high school have learned to integrate a function, the calculation of the volume of a solid of revolution, like a rotated parabola, is taken as a good applied example. The next step is to calculate the volume of a real object which is interpreted as a solid of revolution of a given function f(x). The students do all these calculations in the same way and get the same result, so the teachers can easily decide whether a result is right or wrong. If the students have learned to work with a graphical or CAS calculator, they can calculate the volume of real solids of revolution by modelling a suitably fitted function f(x). Every student has to decide which points of the curve that generates the solid of revolution should be taken and which function will suitably fit the curve. In Austrian high schools teachers use GeoGebra, software which allows photographs or scanned material to be inserted into the geometry window as a background picture. In this case the student and the teacher can check whether the graph of the calculated function fits the generating curve in a useful way.
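The modelling workflow described above can be sketched in a few lines, shown here in Python rather than GeoGebra or a CAS: profile points read off a picture of the object (invented values below) are fitted with a polynomial f(x), and the volume of the solid of revolution is obtained from V = π ∫ f(x)² dx.

```python
# Hedged sketch: fit a profile curve to measured points and compute the
# volume of the solid of revolution, V = pi * integral of f(x)^2 dx.
# The sample points below are invented for illustration.
import numpy as np

# (x, radius) points a student might read off a photo of a vase profile.
x_pts = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])   # cm along the axis
r_pts = np.array([3.0, 3.6, 4.5, 4.9, 4.2, 2.8])    # cm radius

coeffs = np.polyfit(x_pts, r_pts, deg=3)             # model f(x) as a cubic
f = np.poly1d(coeffs)

# Numerical integration of pi * f(x)^2 over the object's extent.
xs = np.linspace(x_pts[0], x_pts[-1], 2001)
volume = np.pi * np.trapz(f(xs) ** 2, xs)

print("fitted f(x):", f)
print(f"volume of revolution: {volume:.1f} cm^3")
```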
