361

Optimizing Extremal Eigenvalues of Weighted Graph Laplacians and Associated Graph Realizations

Reiß, Susanna 09 August 2012 (has links) (PDF)
This thesis deals with optimizing extremal eigenvalues of weighted graph Laplacian matrices. In general, the Laplacian matrix of a (weighted) graph is of particular importance in spectral graph theory and combinatorial optimization (e.g., graph partitioning problems such as max-cut and graph bipartitioning). In particular, the pioneering work of M. Fiedler investigates extremal eigenvalues of weighted graph Laplacians and provides close connections to the node and edge connectivity of a graph. Motivated by Fiedler, Göring et al. investigated further connections between structural properties of the graph and the eigenspace of the second smallest eigenvalue of weighted graph Laplacians using a semidefinite optimization approach. By redistributing the edge weights of a graph, the following three optimization problems are studied in this thesis: maximizing the second smallest eigenvalue (based on the mentioned work of Göring et al.), minimizing the maximum eigenvalue, and minimizing the difference between the maximum and the second smallest eigenvalue of the weighted Laplacian. In all three problems, a semidefinite optimization formulation allows the corresponding semidefinite dual to be interpreted as a graph realization problem: to each node of the graph a vector in Euclidean space is assigned, fulfilling constraints that depend on the considered problem. Optimal realizations are investigated, and connections to the eigenspaces of the corresponding optimized eigenvalues are established. Furthermore, optimal realizations are closely linked to the separator structure of the graph. Depending on this structure, folding properties of optimal realizations are characterized on the one hand, and the existence of optimal realizations of bounded dimension is proven on the other; the general bounds depend on the tree-width of the graph. In the case of minimizing the maximum eigenvalue, bipartite graphs form an important family, as an optimal one-dimensional realization may be constructed. By taking the symmetry of the graph into account, a particular optimal edge weighting is shown to exist. For the coupled problem, i.e., minimizing the difference between the maximum and the second smallest eigenvalue, and the single problems, i.e., minimizing the maximum and maximizing the second smallest eigenvalue, connections between the feasible (optimal) sets are established.
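Maximizing the second smallest Laplacian eigenvalue over a weight redistribution admits a compact semidefinite programming formulation. The following minimal sketch (Python with cvxpy; the toy graph, weight budget and variable names are illustrative assumptions, not data from the thesis) encodes the standard primal SDP: the weighted Laplacian must dominate lam·(I − 11ᵀ/n) on the subspace orthogonal to the all-ones vector while the total edge weight stays fixed.

```python
import numpy as np
import cvxpy as cp

def incidence_vec(i, j, n):
    """Signed incidence vector b_e of edge e = (i, j)."""
    b = np.zeros(n)
    b[i], b[j] = 1.0, -1.0
    return b

# Hypothetical toy graph (4 nodes, 5 edges); graph and weight budget are
# illustrative assumptions only.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4

w = cp.Variable(len(edges), nonneg=True)   # edge weights to redistribute
lam = cp.Variable()                        # lower bound on lambda_2

# Weighted Laplacian L(w) = sum_e w_e * b_e b_e^T
L = sum(w[k] * np.outer(incidence_vec(i, j, n), incidence_vec(i, j, n))
        for k, (i, j) in enumerate(edges))

P = np.eye(n) - np.ones((n, n)) / n        # projector onto the complement of span{1}
constraints = [
    L - lam * P >> 0,                      # L(w) dominates lam on that subspace
    cp.sum(w) == len(edges),               # keep the total edge weight fixed
]
prob = cp.Problem(cp.Maximize(lam), constraints)
prob.solve()
print("maximized lambda_2:", lam.value)
print("optimal edge weights:", w.value)
```

As the abstract notes, the semidefinite dual of this program is precisely the graph realization problem, with one vector per node of the graph.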
362

Blind Detection Techniques For Spread Spectrum Audio Watermarking

Krishna Kumar, S 10 1900 (has links)
In spread spectrum (SS) watermarking of audio signals, since the watermark acts as additive noise to the host audio signal, the most important challenge is to maintain perceptual transparency. Human perception is a very sensitive apparatus, yet it can be exploited to hide some information reliably. SS watermark embedding has been proposed in which psycho-acoustically shaped pseudo-random sequences are embedded directly into the time-domain audio signal. However, these watermarking schemes use informed detection, in which the original signal is assumed to be available to the watermark detector. Blind detection of psycho-acoustically shaped SS watermarks is not well addressed in the literature. The problem is still interesting because blind detection is more practical for audio signals, and psycho-acoustically shaped watermark embedding offers the maximum possible watermark energy under the requirement of perceptual transparency. In this thesis we study the blind detection of psycho-acoustically shaped SS watermarks in time-domain audio signals. We focus on a class of watermark sequences known as random phase watermarks, whose magnitude spectrum is defined by the perceptual criteria and whose randomness lies in the phase spectrum. Blind watermark detectors, which do not have access to the original host signal, may seem handicapped, because an approximate watermark has to be re-derived from the watermarked signal. Since comparing blind detection with fully informed detection is unfair, a hypothetical detection scheme, denoted semi-blind detection, is used as a reference benchmark. In semi-blind detection, the host signal as such is not available for detection, but it is assumed that sufficient information is available to derive the exact watermark that could be embedded in the given signal. Some reduction in performance is anticipated for blind detection relative to semi-blind detection. Our experiments revealed that the statistical performance of the blind detector is in fact better than that of the semi-blind detector. We analyze the watermark-to-host correlation (WHC) of random phase watermarks, and the results indicate that the WHC is higher when a legitimate watermark is present in the audio signal, which leads to better detection performance. Based on these findings, we attempt to harness this increased correlation to further improve the performance. The analysis shows that a uniformly distributed phase difference (between the host signal and the watermark) provides the maximum advantage. This property is verified through experimentation over a variety of audio signals. In the second part, the correlated nature of audio signals is identified as a potential threat to reliable blind watermark detection, and audio pre-whitening methods are suggested as a possible remedy. A direct deterministic whitening (DDW) scheme is derived from a frequency-domain analysis of the time-domain correlation process. Our experimental studies reveal that Savitzky-Golay whitening (SGW), which is otherwise inferior to the DDW technique, performs better when the audio signal is predominantly low-pass. The novelty of this work lies in exploiting the complementary nature of the two whitening techniques and combining them to obtain a hybrid whitening (HbW) scheme, in which the DDW and SGW techniques are applied selectively, based on short-time spectral characteristics of the audio signal.
The hybrid scheme extends the reliability of watermark detection to a wider range of audio signals. We also discuss enhancements to the HbW technique for robustness to temporal offsets and filtering. Robustness of SS watermark blind detection, with hybrid whitening, is determined through a set of experiments and the results are presented. It is seen that the watermarking scheme is robust to common signal processing operations such as additive noise, filtering, lossy compression, etc.
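To make the detection pipeline concrete, here is a minimal sketch of correlation-based blind detection with a Savitzky-Golay pre-whitening step. The whitening form (signal minus its Savitzky-Golay-smoothed version), the filter parameters, the synthetic host signal and the detection threshold are all illustrative assumptions; the thesis's exact DDW/SGW/HbW schemes and the psycho-acoustic shaping of the watermark are not reproduced.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)

def sgw_whiten(x, window=31, order=3):
    """Remove the low-pass (correlated) component estimated by a Savitzky-Golay fit."""
    return x - savgol_filter(x, window, order)

def blind_detect(received, watermark, threshold):
    """Normalized correlation between the whitened signal and the whitened watermark."""
    r = sgw_whiten(received)
    w = sgw_whiten(watermark)
    stat = np.dot(r, w) / (np.linalg.norm(r) * np.linalg.norm(w) + 1e-12)
    return stat, stat > threshold

# Toy host: a correlated (low-pass) signal; watermark: a pseudo-random sequence
# known to the detector via its key (psycho-acoustic shaping omitted).
host = np.cumsum(rng.standard_normal(4096)) * 0.01
wm = rng.choice([-1.0, 1.0], size=4096) * 0.005
marked = host + wm

print(blind_detect(marked, wm, threshold=0.1))   # watermarked input
print(blind_detect(host, wm, threshold=0.1))     # unwatermarked input
```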
363

Diversity-Multiplexing Tradeoff Of Asynchronous Cooperative Relay Networks And Diversity Embedded Coding Schemes

Naveen, N 07 1900 (has links)
This thesis consists of two parts addressing two different problems in fading channels. The first part deals with asynchronous cooperative relay communication. The assumption that nodes in a cooperative relay network operate in a synchronous fashion is often unrealistic. In this work we consider two different models of asynchronous operation in cooperative-diversity networks experiencing slow fading and examine the corresponding Diversity-Multiplexing Tradeoffs (DMT). For both models, we propose protocols and distributed space-time codes that asymptotically achieve the transmit diversity bound on the DMT for all multiplexing gains and for any number of relays N ≥ 2. The distributed space-time codes for all the protocols considered are based on Cyclic Division Algebras (CDA). The second part of the work addresses the DMT analysis of diversity embedded codes for MIMO channels. Diversity embedded codes are high-rate codes designed so that a high-diversity code is embedded within them. This allows a form of opportunistic communication depending on the channel conditions: the high-diversity code ensures that at least part of the information is received reliably, whereas the embedded high-rate code allows additional information to be transferred if the channel is good. This can be thought of as coding the data into a high-priority and a low-priority stream, so that the high-priority stream gets better reliability than the low-priority stream. We show that superposition-based diversity embedded codes in conjunction with naive single-stream decoding are sub-optimal in terms of the DM tradeoff. We then construct explicit diversity embedded codes by the superposition of approximately universal space-time codes from CDAs. The relationship between broadcast channels and the diversity embedded setting is then utilized to provide an achievable Diversity Gain Region (DGR) for MIMO broadcast channels.
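For reference, the benchmark curve underlying such DMT comparisons is the classical point-to-point MIMO diversity-multiplexing tradeoff of Zheng and Tse (standard background, not a result of this thesis): for an m × n Rayleigh-fading channel with sufficiently long code blocks,

```latex
% Optimal DMT of an m x n MIMO Rayleigh channel (Zheng--Tse):
d^{*}(r) = (m - r)(n - r), \qquad r = 0, 1, \dots, \min(m, n),
% with linear interpolation of d^{*}(r) between consecutive integer values of r.
```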
364

Learning from biometric distances: Performance and security related issues in face recognition systems

Mohanty, Pranab 01 June 2007 (has links)
We present a theory for constructing linear, black-box approximations to face recognition algorithms and empirically demonstrate that a surprisingly diverse set of face recognition approaches can be approximated well using a linear model. Constructing the linear model of a face recognition algorithm involves embedding a training set of face images constrained by the distances between them, as computed by the face recognition algorithm being approximated. We accomplish this embedding by iterative majorization, initialized by classical multi-dimensional scaling (MDS). We empirically demonstrate the adequacy of the linear model using six face recognition algorithms, spanning both template-based and feature-based approaches, on standard face recognition benchmarks such as the Facial Recognition Technology (FERET) and Face Recognition Grand Challenge (FRGC) data sets. The experimental results show that the average error in modeling for the six algorithms is 6.3% at 0.001 False Acceptance Rate (FAR) on the FERET fafb probe set, which contains the largest number of subjects among all the probe sets. We demonstrate the usefulness of the linear model for algorithm-dependent indexing of face databases and find that it yields a more than 20-fold reduction in face comparisons for the Bayesian intra/extra-class person classifier (BAY), the Elastic Bunch Graph Matching algorithm (EBGM), and the commercial face recognition algorithms. We also propose a novel paradigm to reconstruct face templates from match scores using the linear model and use the reconstructed templates to explore security breaches in a face recognition system. We evaluate the proposed template reconstruction scheme using three fundamentally different face recognition algorithms: Principal Component Analysis (PCA), the Bayesian intra/extra-class person classifier (BAY), and a feature-based commercial algorithm. With an operating point set at 1% False Acceptance Rate (FAR) and 99% True Acceptance Rate (TAR) for 1196 enrollments (the FERET gallery), we show that at most 600 attempts (score computations) are required to achieve a 73%, 72% and 100% chance of breaking in as a randomly chosen target subject for the commercial, BAY and PCA-based face recognition systems, respectively. We also show that the proposed reconstruction scheme has a 47% higher probability of breaking in as a randomly chosen target subject for the commercial system compared to a hill-climbing approach with the same number of attempts.
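The classical MDS initialization mentioned above has a closed form: double-center the squared-distance matrix and keep the leading eigenvectors. A minimal sketch follows; the synthetic coordinates, gallery size and embedding dimension are illustrative assumptions, and the iterative-majorization refinement used in the thesis is omitted.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Embed n points from an n x n pairwise distance matrix D (classical MDS)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]           # keep the largest eigenvalues
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

rng = np.random.default_rng(1)
X_true = rng.standard_normal((10, 2))            # hypothetical "gallery" layout
D = np.linalg.norm(X_true[:, None] - X_true[None, :], axis=-1)
X_hat = classical_mds(D, dim=2)                  # recovers X_true up to rotation/reflection
print(X_hat.shape)
```

In the thesis setting, D would instead hold the (dis)similarity scores returned by the black-box face matcher on the training images.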
365

Robust watermarking techniques for stereoscopic video protection

Chammem, Afef 27 May 2013 (has links) (PDF)
The explosion in stereoscopic video distribution increases the concerns over its copyright protection. Watermarking can be considered the most flexible property-right protection technology. The practical watermarking challenge is to reach a trade-off among transparency, robustness, data payload and computational cost. While the capture and display of 3D content are based solely on the two left/right views, alternative representations, such as disparity maps, should also be considered during transmission/storage. A specific study of the insertion domain that is optimal with respect to the above-mentioned properties is also required. The present thesis tackles these challenges. First, a new disparity map (3D Video New Three Step Search, 3DV-NTSS) is designed. The performance of 3DV-NTSS was evaluated in terms of visual quality of the reconstructed image and computational cost. When compared with state-of-the-art methods (NTSS and FS-MPEG), average gains of 2 dB in PSNR and 0.1 in SSIM are obtained, and the computational cost is reduced on average by factors between 1.3 and 13. Second, a comparative study of the main classes of 2D-inherited watermarking methods and of their optimal insertion domains is carried out. Four insertion methods are considered, belonging to the SS, SI and hybrid (Fast-IProtect) families. The experiments showed that Fast-IProtect, performed in the new disparity-map domain (3DV-NTSS), is generic enough to serve a large variety of applications. The statistical relevance of the results is ensured by 95% confidence limits, with underlying relative errors lower than e_r < 0.1.
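As background for the disparity-map discussion, here is a minimal block-matching sketch using an exhaustive 1-D search over rectified views; the block size, search range and synthetic images are illustrative assumptions, and the reduced-complexity 3DV-NTSS search pattern itself is not reproduced.

```python
import numpy as np

def disparity_map(left, right, block=8, max_disp=16):
    """Per-block disparity between rectified left/right views via SAD matching."""
    h, w = left.shape
    disp = np.zeros((h // block, w // block), dtype=np.int32)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = left[y:y + block, x:x + block].astype(np.float64)
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):      # search leftwards in the right view
                cand = right[y:y + block, x - d:x - d + block].astype(np.float64)
                cost = np.abs(ref - cand).sum()        # sum of absolute differences
                if cost < best:
                    best, best_d = cost, d
            disp[by, bx] = best_d
    return disp

rng = np.random.default_rng(2)
L_img = rng.integers(0, 256, size=(64, 64))
R_img = np.roll(L_img, -3, axis=1)                     # synthetic 3-pixel horizontal shift
print(disparity_map(L_img, R_img)[2:6, 2:6])           # interior blocks report disparity 3
```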
366

Two dimensional Maximal Supergravity, Consistent Truncations and Holography

Ortiz, Thomas 07 July 2014 (has links) (PDF)
A complete, non-trivial supersymmetric deformation of maximal supergravity in two dimensions is achieved by gauging an SO(9) group. The resulting theory describes the reduction of type IIA supergravity on an AdS_2 x S^8 background and is of primary importance in the Domain-Wall/Quantum Field Theory correspondence for the D0-brane case. To prepare the construction of the SO(9)-gauged maximal supergravity, we focus on eleven-dimensional supergravity and on maximal supergravity in three dimensions, since they give rise to important, off-shell inequivalent formulations of the ungauged theory in two dimensions. The embedding tensor formalism is presented, allowing for a general description of the gaugings consistent with supersymmetry. The SO(9) supergravity is explicitly constructed and applications are considered. In particular, an embedding of the bosonic sector of the two-dimensional theory into type IIA supergravity is obtained; hence the Cartan truncation of the SO(9) supergravity is proved to be consistent. This motivates holographic applications: correlation functions for operators in dual matrix models are derived from the study of gravity-side excitations around half-BPS backgrounds. These results are fully discussed and outlooks are presented.
367

Archivage Sécurisé d'Images

Motsch, Jean 27 November 2008 (has links) (PDF)
The proliferation of image-acquisition devices, conventional ones such as photographic cameras and new ones such as MRI imagers and multispectral analyzers, leads to exponentially growing volumes of data to handle. As a corollary, the need to archive such quantities of data raises new questions. How can very large images be managed? How can the integrity of the stored data be guaranteed? What mechanism should be used to bind an image to associated data? For a long time, the emphasis was placed on the compression performance of coding schemes, measured through rate-distortion curves and a few subjective evaluations. Properties such as scalability, rate control and error resilience then appeared, in order to keep pace with the evolution of communication media. However, taking archiving requirements into account leads to new services, such as copy management, the addition of metadata, and image security. In most cases these services are provided outside the compression schemes, with the notable exception of JPSEC, an effort related to JPEG-2000. The approach adopted in this study is to integrate, into an efficient still-image coder (the LAR coder), encryption and data-insertion primitives that provide the essential services related to image archiving. The LAR coder is hierarchical, scalable in resolution and quality, and ranges from lossy to lossless coding. Its performance is beyond the state of the art, particularly at very low bit rates and in lossless compression. The key to this coder's performance is a quadtree partitioning adapted to the semantic content of the image. In the context of secure image archiving, this document therefore presents the following triptych: efficient compression, partial encryption, and data insertion. For compression, the use of a reversible Hadamard transform combined with non-linear filters yields good performance, with or without loss. For encryption, the hierarchical structure of the LAR coder enables a partial encryption scheme that allows fine-grained management of rights and keys, as well as a choice among the available resolution and quality levels. Elements providing protection at zero cost are also presented. For data insertion, the parallel between LAR-Interleaved S+P and difference expansion allows an efficient scheme to be implemented.
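The difference-expansion mechanism referred to in the last sentence can be summarized in a few lines. The sketch below follows Tian's pixel-pair difference expansion; the function names are hypothetical, and the overflow handling and location map required by a complete scheme (as well as the integration into LAR-Interleaved S+P) are omitted.

```python
def de_embed(x, y, bit):
    """Embed one bit into the pixel pair (x, y) by expanding their difference."""
    l = (x + y) // 2          # integer average, kept invariant by the embedding
    h = x - y                 # difference
    h2 = 2 * h + bit          # expanded difference carries the payload bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the bit and the original pixel pair from an embedded pair."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit = h2 & 1
    h = h2 >> 1               # undo the expansion
    return bit, l + (h + 1) // 2, l - h // 2

pair = de_embed(120, 118, 1)      # embedded pixel pair
print(pair, de_extract(*pair))    # recovers the bit and the original (120, 118)
```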
368

Identification de la variabilité spatiale des champs de contraintes dans les agrégats polycristallins et application à l'approche locale de la rupture

Dang, Xuan Hung 11 October 2012 (has links) (PDF)
This thesis is a contribution to the construction of the Local Approach to fracture at the microscopic scale by means of polycrystalline aggregate modeling. It consists in taking into account the spatial variability of the material microstructure. To this end, the micromechanical modeling of the material is carried out through finite-element simulation of polycrystalline aggregates. The random stress fields in the material (maximum principal stress and cleavage stress), which represent the spatial variability of the microstructure, are then modeled as a stationary, ergodic Gaussian random field. The spatial-variability properties of these fields are identified with an identification method, e.g. the periodogram method, the variogram method, or the maximum-likelihood method. Synthetic realizations of the stress fields are then simulated with a simulation method, e.g. the discrete Karhunen-Loève method, the circulant embedding method, or the spectral method, without any new finite-element computation. Finally, a Local Approach model of fracture based on simulation of the cleavage stress field, into which the simulated realizations of the field can be integrated, is built in order to estimate the probability of fracture of the material.
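As a minimal illustration of the simulation step (generating new stress-field realizations without further finite-element runs), the sketch below draws realizations of a stationary Gaussian field on a 1-D grid via a Cholesky factorization of an assumed exponential covariance; the grid, correlation length and variance are illustrative stand-ins for identified parameters, and the Karhunen-Loève, circulant-embedding and spectral simulators named above are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
n, dx, corr_len, sigma = 200, 1.0, 20.0, 50.0      # illustrative grid and parameters (MPa)

x = np.arange(n) * dx
# Stationary exponential covariance C(x_i, x_j) = sigma^2 * exp(-|x_i - x_j| / corr_len)
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
Lchol = np.linalg.cholesky(C + 1e-10 * np.eye(n))  # small jitter for numerical stability

realizations = Lchol @ rng.standard_normal((n, 5)) # five synthetic stress-field realizations
print(realizations.shape)
```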
369

Análise de estabilidade de sistemas dinâmicos híbridos e descontínuos modelados por semigrupos:

Pena, Ismael da Silva [UNESP] 26 February 2008 (has links) (PDF)
Hybrid dynamical systems are characterized by simultaneously exhibiting several types of dynamic behavior (continuous, discrete, discrete-event) in different parts of the system. This work studies stability results, in the sense of Lyapunov, for general hybrid dynamical systems that use a generalized notion of time defined on a totally ordered metric space. It is shown that these systems can be immersed in discontinuous dynamical systems defined on R+ in such a way that their qualitative properties are preserved. As the main focus, stability results are studied for discontinuous dynamical systems modeled by semigroups of operators, in which the states of the system belong to Banach spaces. In this case, as an alternative to the classical stability theory, the results do not make use of the usual Lyapunov functions and are therefore easier to apply, in view of the difficulty of finding such functions for many systems. Furthermore, the results are applied to a class of delay differential equations.
370

Induction de lexiques bilingues à partir de corpus comparables et parallèles

Jakubina, Laurent 07 1900 (has links)
No description available.
